Test Report: KVM_Linux_crio 19423

                    
1f2c26fb323282b69eee479fdee82bbe44410c3d:2024-08-16:35811

Failed tests (31/314)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 152.42
36 TestAddons/parallel/MetricsServer 323.96
45 TestAddons/StoppedEnableDisable 154.36
164 TestMultiControlPlane/serial/StopSecondaryNode 141.89
166 TestMultiControlPlane/serial/RestartSecondaryNode 49.42
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 356.32
171 TestMultiControlPlane/serial/StopCluster 141.66
231 TestMultiNode/serial/RestartKeepsNodes 327.63
233 TestMultiNode/serial/StopMultiNode 141.32
240 TestPreload 352.82
248 TestKubernetesUpgrade 442.36
262 TestPause/serial/SecondStartNoReconfiguration 69.64
268 TestNoKubernetes/serial/StartNoArgs 72.22
285 TestStartStop/group/old-k8s-version/serial/FirstStart 319
292 TestStartStop/group/embed-certs/serial/Stop 139.07
295 TestStartStop/group/no-preload/serial/Stop 139.07
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
299 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 82.75
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
309 TestStartStop/group/old-k8s-version/serial/SecondStart 723.78
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.15
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.18
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.18
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.37
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 417.96
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.54
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 369.44
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 130.53
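
Each failure above can be re-run in isolation with the Go test runner from the minikube source tree. The invocation below is an illustrative sketch only: the timeout value is an assumption, and any driver or container-runtime flags the integration harness expects are omitted here.

	# from the minikube repo root, with out/minikube-linux-amd64 already built
	go test ./test/integration -v -timeout 90m -run 'TestAddons/parallel/Ingress'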
TestAddons/parallel/Ingress (152.42s)
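
The step that fails is the in-VM curl against the ingress controller: the ssh'd curl exits with status 28 (curl's timeout code) after roughly 2m10s, and the test reports "failed to get expected response from http://127.0.0.1/". A hedged sketch of repeating the same check by hand against the profile from this run; the -v and --max-time flags are added here for illustration and are not part of the test:

	out/minikube-linux-amd64 -p addons-966941 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"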

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-966941 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-966941 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-966941 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [293f8398-f883-4566-aa48-f7d867211e99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [293f8398-f883-4566-aa48-f7d867211e99] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003555069s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-966941 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.807122091s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-966941 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.129
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 addons disable ingress-dns --alsologtostderr -v=1: (1.001629258s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 addons disable ingress --alsologtostderr -v=1: (7.68072045s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-966941 -n addons-966941
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 logs -n 25: (1.172495339s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-238279                                                                     | download-only-238279 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| delete  | -p download-only-723080                                                                     | download-only-723080 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-862449 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | binary-mirror-862449                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43873                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-862449                                                                     | binary-mirror-862449 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-966941 --wait=true                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:23 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-966941 ssh cat                                                                       | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | /opt/local-path-provisioner/pvc-e2d2f869-e0e4-4450-9779-9bdaae043e0c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-966941 ip                                                                            | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | -p addons-966941                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | -p addons-966941                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-966941 ssh curl -s                                                                   | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-966941 addons                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-966941 ip                                                                            | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:27 UTC | 16 Aug 24 12:27 UTC |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:27 UTC | 16 Aug 24 12:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:27 UTC | 16 Aug 24 12:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:21:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:21:36.812588   11845 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:21:36.812700   11845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:21:36.812711   11845 out.go:358] Setting ErrFile to fd 2...
	I0816 12:21:36.812717   11845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:21:36.812897   11845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:21:36.813482   11845 out.go:352] Setting JSON to false
	I0816 12:21:36.814262   11845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":242,"bootTime":1723810655,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:21:36.814322   11845 start.go:139] virtualization: kvm guest
	I0816 12:21:36.816396   11845 out.go:177] * [addons-966941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:21:36.817807   11845 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:21:36.817833   11845 notify.go:220] Checking for updates...
	I0816 12:21:36.820495   11845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:21:36.821803   11845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:21:36.822969   11845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:36.824101   11845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:21:36.825313   11845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:21:36.826555   11845 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:21:36.857351   11845 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 12:21:36.858597   11845 start.go:297] selected driver: kvm2
	I0816 12:21:36.858616   11845 start.go:901] validating driver "kvm2" against <nil>
	I0816 12:21:36.858628   11845 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:21:36.859277   11845 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:21:36.859382   11845 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:21:36.873504   11845 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:21:36.873554   11845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:21:36.873756   11845 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:21:36.873787   11845 cni.go:84] Creating CNI manager for ""
	I0816 12:21:36.873800   11845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:21:36.873807   11845 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 12:21:36.873861   11845 start.go:340] cluster config:
	{Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:21:36.873969   11845 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:21:36.875713   11845 out.go:177] * Starting "addons-966941" primary control-plane node in "addons-966941" cluster
	I0816 12:21:36.876944   11845 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:21:36.876968   11845 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:21:36.876974   11845 cache.go:56] Caching tarball of preloaded images
	I0816 12:21:36.877045   11845 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:21:36.877055   11845 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:21:36.877341   11845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/config.json ...
	I0816 12:21:36.877360   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/config.json: {Name:mka6e26b83c1ff181c94a2ba1ba48c6b50bbc421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:21:36.877475   11845 start.go:360] acquireMachinesLock for addons-966941: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:21:36.877516   11845 start.go:364] duration metric: took 28.838µs to acquireMachinesLock for "addons-966941"
	I0816 12:21:36.877532   11845 start.go:93] Provisioning new machine with config: &{Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:21:36.877586   11845 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 12:21:36.878992   11845 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0816 12:21:36.879114   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:21:36.879147   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:21:36.893177   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I0816 12:21:36.893662   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:21:36.894203   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:21:36.894222   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:21:36.894593   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:21:36.894772   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:21:36.894938   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:21:36.895076   11845 start.go:159] libmachine.API.Create for "addons-966941" (driver="kvm2")
	I0816 12:21:36.895110   11845 client.go:168] LocalClient.Create starting
	I0816 12:21:36.895161   11845 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:21:37.117247   11845 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:21:37.321675   11845 main.go:141] libmachine: Running pre-create checks...
	I0816 12:21:37.321698   11845 main.go:141] libmachine: (addons-966941) Calling .PreCreateCheck
	I0816 12:21:37.322183   11845 main.go:141] libmachine: (addons-966941) Calling .GetConfigRaw
	I0816 12:21:37.322570   11845 main.go:141] libmachine: Creating machine...
	I0816 12:21:37.322582   11845 main.go:141] libmachine: (addons-966941) Calling .Create
	I0816 12:21:37.322731   11845 main.go:141] libmachine: (addons-966941) Creating KVM machine...
	I0816 12:21:37.323976   11845 main.go:141] libmachine: (addons-966941) DBG | found existing default KVM network
	I0816 12:21:37.324706   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.324555   11867 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0816 12:21:37.324727   11845 main.go:141] libmachine: (addons-966941) DBG | created network xml: 
	I0816 12:21:37.324742   11845 main.go:141] libmachine: (addons-966941) DBG | <network>
	I0816 12:21:37.324757   11845 main.go:141] libmachine: (addons-966941) DBG |   <name>mk-addons-966941</name>
	I0816 12:21:37.324767   11845 main.go:141] libmachine: (addons-966941) DBG |   <dns enable='no'/>
	I0816 12:21:37.324777   11845 main.go:141] libmachine: (addons-966941) DBG |   
	I0816 12:21:37.324789   11845 main.go:141] libmachine: (addons-966941) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 12:21:37.324797   11845 main.go:141] libmachine: (addons-966941) DBG |     <dhcp>
	I0816 12:21:37.324804   11845 main.go:141] libmachine: (addons-966941) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 12:21:37.324811   11845 main.go:141] libmachine: (addons-966941) DBG |     </dhcp>
	I0816 12:21:37.324818   11845 main.go:141] libmachine: (addons-966941) DBG |   </ip>
	I0816 12:21:37.324827   11845 main.go:141] libmachine: (addons-966941) DBG |   
	I0816 12:21:37.324843   11845 main.go:141] libmachine: (addons-966941) DBG | </network>
	I0816 12:21:37.324853   11845 main.go:141] libmachine: (addons-966941) DBG | 
	I0816 12:21:37.329745   11845 main.go:141] libmachine: (addons-966941) DBG | trying to create private KVM network mk-addons-966941 192.168.39.0/24...
	I0816 12:21:37.392880   11845 main.go:141] libmachine: (addons-966941) DBG | private KVM network mk-addons-966941 192.168.39.0/24 created
	I0816 12:21:37.392918   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.392843   11867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:37.392955   11845 main.go:141] libmachine: (addons-966941) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941 ...
	I0816 12:21:37.392980   11845 main.go:141] libmachine: (addons-966941) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:21:37.392998   11845 main.go:141] libmachine: (addons-966941) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:21:37.651788   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.651616   11867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa...
	I0816 12:21:37.851487   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.851337   11867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/addons-966941.rawdisk...
	I0816 12:21:37.851522   11845 main.go:141] libmachine: (addons-966941) DBG | Writing magic tar header
	I0816 12:21:37.851537   11845 main.go:141] libmachine: (addons-966941) DBG | Writing SSH key tar header
	I0816 12:21:37.851578   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.851459   11867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941 ...
	I0816 12:21:37.851598   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941 (perms=drwx------)
	I0816 12:21:37.851618   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:21:37.851632   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941
	I0816 12:21:37.851642   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:21:37.851655   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:21:37.851665   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:21:37.851677   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:21:37.851694   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:37.851708   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:21:37.851724   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:21:37.851736   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:21:37.851754   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:21:37.851765   11845 main.go:141] libmachine: (addons-966941) Creating domain...
	I0816 12:21:37.851777   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home
	I0816 12:21:37.851793   11845 main.go:141] libmachine: (addons-966941) DBG | Skipping /home - not owner
	I0816 12:21:37.852714   11845 main.go:141] libmachine: (addons-966941) define libvirt domain using xml: 
	I0816 12:21:37.852747   11845 main.go:141] libmachine: (addons-966941) <domain type='kvm'>
	I0816 12:21:37.852759   11845 main.go:141] libmachine: (addons-966941)   <name>addons-966941</name>
	I0816 12:21:37.852767   11845 main.go:141] libmachine: (addons-966941)   <memory unit='MiB'>4000</memory>
	I0816 12:21:37.852776   11845 main.go:141] libmachine: (addons-966941)   <vcpu>2</vcpu>
	I0816 12:21:37.852785   11845 main.go:141] libmachine: (addons-966941)   <features>
	I0816 12:21:37.852794   11845 main.go:141] libmachine: (addons-966941)     <acpi/>
	I0816 12:21:37.852804   11845 main.go:141] libmachine: (addons-966941)     <apic/>
	I0816 12:21:37.852814   11845 main.go:141] libmachine: (addons-966941)     <pae/>
	I0816 12:21:37.852827   11845 main.go:141] libmachine: (addons-966941)     
	I0816 12:21:37.852839   11845 main.go:141] libmachine: (addons-966941)   </features>
	I0816 12:21:37.852855   11845 main.go:141] libmachine: (addons-966941)   <cpu mode='host-passthrough'>
	I0816 12:21:37.852865   11845 main.go:141] libmachine: (addons-966941)   
	I0816 12:21:37.852874   11845 main.go:141] libmachine: (addons-966941)   </cpu>
	I0816 12:21:37.852885   11845 main.go:141] libmachine: (addons-966941)   <os>
	I0816 12:21:37.852893   11845 main.go:141] libmachine: (addons-966941)     <type>hvm</type>
	I0816 12:21:37.852903   11845 main.go:141] libmachine: (addons-966941)     <boot dev='cdrom'/>
	I0816 12:21:37.852936   11845 main.go:141] libmachine: (addons-966941)     <boot dev='hd'/>
	I0816 12:21:37.852944   11845 main.go:141] libmachine: (addons-966941)     <bootmenu enable='no'/>
	I0816 12:21:37.852955   11845 main.go:141] libmachine: (addons-966941)   </os>
	I0816 12:21:37.852965   11845 main.go:141] libmachine: (addons-966941)   <devices>
	I0816 12:21:37.852977   11845 main.go:141] libmachine: (addons-966941)     <disk type='file' device='cdrom'>
	I0816 12:21:37.852990   11845 main.go:141] libmachine: (addons-966941)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/boot2docker.iso'/>
	I0816 12:21:37.853017   11845 main.go:141] libmachine: (addons-966941)       <target dev='hdc' bus='scsi'/>
	I0816 12:21:37.853039   11845 main.go:141] libmachine: (addons-966941)       <readonly/>
	I0816 12:21:37.853047   11845 main.go:141] libmachine: (addons-966941)     </disk>
	I0816 12:21:37.853052   11845 main.go:141] libmachine: (addons-966941)     <disk type='file' device='disk'>
	I0816 12:21:37.853061   11845 main.go:141] libmachine: (addons-966941)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:21:37.853073   11845 main.go:141] libmachine: (addons-966941)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/addons-966941.rawdisk'/>
	I0816 12:21:37.853082   11845 main.go:141] libmachine: (addons-966941)       <target dev='hda' bus='virtio'/>
	I0816 12:21:37.853087   11845 main.go:141] libmachine: (addons-966941)     </disk>
	I0816 12:21:37.853095   11845 main.go:141] libmachine: (addons-966941)     <interface type='network'>
	I0816 12:21:37.853100   11845 main.go:141] libmachine: (addons-966941)       <source network='mk-addons-966941'/>
	I0816 12:21:37.853107   11845 main.go:141] libmachine: (addons-966941)       <model type='virtio'/>
	I0816 12:21:37.853114   11845 main.go:141] libmachine: (addons-966941)     </interface>
	I0816 12:21:37.853125   11845 main.go:141] libmachine: (addons-966941)     <interface type='network'>
	I0816 12:21:37.853133   11845 main.go:141] libmachine: (addons-966941)       <source network='default'/>
	I0816 12:21:37.853138   11845 main.go:141] libmachine: (addons-966941)       <model type='virtio'/>
	I0816 12:21:37.853144   11845 main.go:141] libmachine: (addons-966941)     </interface>
	I0816 12:21:37.853148   11845 main.go:141] libmachine: (addons-966941)     <serial type='pty'>
	I0816 12:21:37.853156   11845 main.go:141] libmachine: (addons-966941)       <target port='0'/>
	I0816 12:21:37.853161   11845 main.go:141] libmachine: (addons-966941)     </serial>
	I0816 12:21:37.853168   11845 main.go:141] libmachine: (addons-966941)     <console type='pty'>
	I0816 12:21:37.853180   11845 main.go:141] libmachine: (addons-966941)       <target type='serial' port='0'/>
	I0816 12:21:37.853187   11845 main.go:141] libmachine: (addons-966941)     </console>
	I0816 12:21:37.853192   11845 main.go:141] libmachine: (addons-966941)     <rng model='virtio'>
	I0816 12:21:37.853206   11845 main.go:141] libmachine: (addons-966941)       <backend model='random'>/dev/random</backend>
	I0816 12:21:37.853218   11845 main.go:141] libmachine: (addons-966941)     </rng>
	I0816 12:21:37.853226   11845 main.go:141] libmachine: (addons-966941)     
	I0816 12:21:37.853235   11845 main.go:141] libmachine: (addons-966941)     
	I0816 12:21:37.853244   11845 main.go:141] libmachine: (addons-966941)   </devices>
	I0816 12:21:37.853253   11845 main.go:141] libmachine: (addons-966941) </domain>
	I0816 12:21:37.853262   11845 main.go:141] libmachine: (addons-966941) 
	I0816 12:21:37.859924   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:83:6d:e5 in network default
	I0816 12:21:37.860536   11845 main.go:141] libmachine: (addons-966941) Ensuring networks are active...
	I0816 12:21:37.860557   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:37.861280   11845 main.go:141] libmachine: (addons-966941) Ensuring network default is active
	I0816 12:21:37.861544   11845 main.go:141] libmachine: (addons-966941) Ensuring network mk-addons-966941 is active
	I0816 12:21:37.862039   11845 main.go:141] libmachine: (addons-966941) Getting domain xml...
	I0816 12:21:37.862702   11845 main.go:141] libmachine: (addons-966941) Creating domain...
	I0816 12:21:39.226117   11845 main.go:141] libmachine: (addons-966941) Waiting to get IP...
	I0816 12:21:39.226798   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:39.227181   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:39.227209   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:39.227123   11867 retry.go:31] will retry after 212.176895ms: waiting for machine to come up
	I0816 12:21:39.440410   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:39.440876   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:39.440898   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:39.440829   11867 retry.go:31] will retry after 318.628327ms: waiting for machine to come up
	I0816 12:21:39.761242   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:39.761693   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:39.761725   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:39.761638   11867 retry.go:31] will retry after 326.446143ms: waiting for machine to come up
	I0816 12:21:40.090044   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:40.090529   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:40.090562   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:40.090475   11867 retry.go:31] will retry after 510.023741ms: waiting for machine to come up
	I0816 12:21:40.601826   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:40.602271   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:40.602307   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:40.602216   11867 retry.go:31] will retry after 470.811839ms: waiting for machine to come up
	I0816 12:21:41.074771   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:41.075149   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:41.075179   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:41.075102   11867 retry.go:31] will retry after 951.863255ms: waiting for machine to come up
	I0816 12:21:42.028898   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:42.029352   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:42.029387   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:42.029306   11867 retry.go:31] will retry after 738.943948ms: waiting for machine to come up
	I0816 12:21:42.770285   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:42.770676   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:42.770700   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:42.770639   11867 retry.go:31] will retry after 1.372347115s: waiting for machine to come up
	I0816 12:21:44.145005   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:44.145379   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:44.145401   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:44.145330   11867 retry.go:31] will retry after 1.259425595s: waiting for machine to come up
	I0816 12:21:45.406828   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:45.407302   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:45.407353   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:45.407275   11867 retry.go:31] will retry after 1.739503164s: waiting for machine to come up
	I0816 12:21:47.147804   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:47.148256   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:47.148293   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:47.148225   11867 retry.go:31] will retry after 2.662184372s: waiting for machine to come up
	I0816 12:21:49.814022   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:49.814419   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:49.814444   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:49.814375   11867 retry.go:31] will retry after 2.650973984s: waiting for machine to come up
	I0816 12:21:52.466479   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:52.466900   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:52.466929   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:52.466855   11867 retry.go:31] will retry after 3.024826315s: waiting for machine to come up
	I0816 12:21:55.494960   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:55.495405   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:55.495425   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:55.495347   11867 retry.go:31] will retry after 5.305855896s: waiting for machine to come up
	I0816 12:22:00.805546   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.805964   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has current primary IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.805981   11845 main.go:141] libmachine: (addons-966941) Found IP for machine: 192.168.39.129
	I0816 12:22:00.805993   11845 main.go:141] libmachine: (addons-966941) Reserving static IP address...
	I0816 12:22:00.806435   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find host DHCP lease matching {name: "addons-966941", mac: "52:54:00:72:dd:30", ip: "192.168.39.129"} in network mk-addons-966941
	I0816 12:22:00.875078   11845 main.go:141] libmachine: (addons-966941) Reserved static IP address: 192.168.39.129
	I0816 12:22:00.875104   11845 main.go:141] libmachine: (addons-966941) Waiting for SSH to be available...
	I0816 12:22:00.875115   11845 main.go:141] libmachine: (addons-966941) DBG | Getting to WaitForSSH function...
	I0816 12:22:00.877555   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.877959   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:00.877990   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.878211   11845 main.go:141] libmachine: (addons-966941) DBG | Using SSH client type: external
	I0816 12:22:00.878238   11845 main.go:141] libmachine: (addons-966941) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa (-rw-------)
	I0816 12:22:00.878270   11845 main.go:141] libmachine: (addons-966941) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:22:00.878284   11845 main.go:141] libmachine: (addons-966941) DBG | About to run SSH command:
	I0816 12:22:00.878297   11845 main.go:141] libmachine: (addons-966941) DBG | exit 0
	I0816 12:22:01.012784   11845 main.go:141] libmachine: (addons-966941) DBG | SSH cmd err, output: <nil>: 
	I0816 12:22:01.013045   11845 main.go:141] libmachine: (addons-966941) KVM machine creation complete!
	I0816 12:22:01.013338   11845 main.go:141] libmachine: (addons-966941) Calling .GetConfigRaw
	I0816 12:22:01.013852   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:01.014047   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:01.014204   11845 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:22:01.014221   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:01.015454   11845 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:22:01.015470   11845 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:22:01.015477   11845 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:22:01.015483   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.017805   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.018107   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.018132   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.018246   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.018415   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.018561   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.018687   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.018833   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.018994   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.019004   11845 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:22:01.124103   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
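
The two "exit 0" probes above (first through the external ssh binary, then through the native Go SSH client) are how libmachine decides the new guest is reachable. A minimal re-creation of that readiness check, assuming the key path and address shown in this log, could look like:

    # Hedged sketch of the SSH readiness probe; key path and IP are taken from the log above.
    KEY=/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa
    for attempt in $(seq 1 60); do
      if ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
             -o ConnectTimeout=10 docker@192.168.39.129 'exit 0' 2>/dev/null; then
        echo "SSH is ready"; break
      fi
      sleep 2
    done
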
	I0816 12:22:01.124123   11845 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:22:01.124130   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.126664   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.126968   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.126996   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.127101   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.127297   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.127466   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.127627   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.127772   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.127964   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.127979   11845 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:22:01.237557   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:22:01.237613   11845 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:22:01.237623   11845 main.go:141] libmachine: Provisioning with buildroot...
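
Provisioner detection keys off the /etc/os-release dump above; the ID=buildroot field is what maps to the buildroot provisioner. A rough shell equivalent of that branch (a sketch, not minikube's actual Go code path):

    # Source the os-release file and branch on the distribution ID.
    . /etc/os-release
    case "$ID" in
      buildroot)     echo "found compatible host: buildroot" ;;
      ubuntu|debian) echo "a deb-based provisioner would be used instead" ;;
      *)             echo "unsupported host: $ID" >&2 ;;
    esac
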
	I0816 12:22:01.237634   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:22:01.237834   11845 buildroot.go:166] provisioning hostname "addons-966941"
	I0816 12:22:01.237855   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:22:01.238044   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.240394   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.240712   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.240740   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.240880   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.241039   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.241198   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.241310   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.241479   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.241630   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.241643   11845 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-966941 && echo "addons-966941" | sudo tee /etc/hostname
	I0816 12:22:01.363855   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-966941
	
	I0816 12:22:01.363878   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.366629   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.367013   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.367046   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.367263   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.367451   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.367607   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.367703   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.367869   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.368066   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.368085   11845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-966941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-966941/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-966941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:22:01.487413   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
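
The shell fragment sent over SSH above is idempotent: it only rewrites the 127.0.1.1 entry (or appends one) if no line in /etc/hosts already ends with the new hostname. On a guest whose hosts file still carries the transient DHCP name, the effect is roughly as follows (illustrative, not captured from the VM):

    # before:  127.0.1.1 minikube
    # after:   127.0.1.1 addons-966941
    grep '^127.0.1.1' /etc/hosts   # quick check that the rename took effect
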
	I0816 12:22:01.487443   11845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:22:01.487478   11845 buildroot.go:174] setting up certificates
	I0816 12:22:01.487488   11845 provision.go:84] configureAuth start
	I0816 12:22:01.487502   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:22:01.487758   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:01.490514   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.490908   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.490974   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.491063   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.493334   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.493680   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.493706   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.493838   11845 provision.go:143] copyHostCerts
	I0816 12:22:01.493896   11845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:22:01.494044   11845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:22:01.494129   11845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:22:01.494202   11845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.addons-966941 san=[127.0.0.1 192.168.39.129 addons-966941 localhost minikube]
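
configureAuth signs a server certificate against the minikube CA with the SAN list shown above (127.0.0.1, 192.168.39.129, addons-966941, localhost, minikube). Something equivalent can be produced with openssl; this is a hedged sketch, not the code path minikube itself uses:

    # Issue a CA-signed server cert carrying the SANs listed in the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.addons-966941"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.129,DNS:addons-966941,DNS:localhost,DNS:minikube")
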
	I0816 12:22:01.559551   11845 provision.go:177] copyRemoteCerts
	I0816 12:22:01.559598   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:22:01.559617   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.562323   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.562653   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.562676   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.562833   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.563019   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.563143   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.563306   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:01.646652   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:22:01.670552   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:22:01.693669   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 12:22:01.716481   11845 provision.go:87] duration metric: took 228.980328ms to configureAuth
	I0816 12:22:01.716513   11845 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:22:01.716691   11845 config.go:182] Loaded profile config "addons-966941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:22:01.716772   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.719693   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.720061   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.720087   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.720257   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.720424   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.720577   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.720764   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.720918   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.721143   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.721159   11845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:22:01.985305   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
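
The command above drops a one-line environment file on the guest and restarts CRI-O so the insecure-registry flag for the service CIDR takes effect. Presumably the crio unit on this ISO sources it as an EnvironmentFile; a quick way to confirm both halves (sketch, and the EnvironmentFile reference is an assumption about the unit, not something shown in this log):

    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -A1 -i EnvironmentFile
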
	
	I0816 12:22:01.985329   11845 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:22:01.985336   11845 main.go:141] libmachine: (addons-966941) Calling .GetURL
	I0816 12:22:01.986661   11845 main.go:141] libmachine: (addons-966941) DBG | Using libvirt version 6000000
	I0816 12:22:01.988765   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.989112   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.989137   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.989280   11845 main.go:141] libmachine: Docker is up and running!
	I0816 12:22:01.989294   11845 main.go:141] libmachine: Reticulating splines...
	I0816 12:22:01.989301   11845 client.go:171] duration metric: took 25.094181306s to LocalClient.Create
	I0816 12:22:01.989329   11845 start.go:167] duration metric: took 25.094258123s to libmachine.API.Create "addons-966941"
	I0816 12:22:01.989341   11845 start.go:293] postStartSetup for "addons-966941" (driver="kvm2")
	I0816 12:22:01.989353   11845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:22:01.989376   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:01.989570   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:22:01.989598   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.991457   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.991717   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.991739   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.991830   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.992009   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.992155   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.992305   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:02.078668   11845 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:22:02.082932   11845 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:22:02.082954   11845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:22:02.083058   11845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:22:02.083083   11845 start.go:296] duration metric: took 93.736523ms for postStartSetup
	I0816 12:22:02.083113   11845 main.go:141] libmachine: (addons-966941) Calling .GetConfigRaw
	I0816 12:22:02.083715   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:02.086531   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.086836   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.086868   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.087038   11845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/config.json ...
	I0816 12:22:02.087204   11845 start.go:128] duration metric: took 25.209609244s to createHost
	I0816 12:22:02.087223   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:02.089238   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.089524   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.089545   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.089668   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:02.089844   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.089994   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.090126   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:02.090281   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:02.090458   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:02.090472   11845 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:22:02.197425   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723810922.174464922
	
	I0816 12:22:02.197453   11845 fix.go:216] guest clock: 1723810922.174464922
	I0816 12:22:02.197461   11845 fix.go:229] Guest: 2024-08-16 12:22:02.174464922 +0000 UTC Remote: 2024-08-16 12:22:02.087214216 +0000 UTC m=+25.306065307 (delta=87.250706ms)
	I0816 12:22:02.197495   11845 fix.go:200] guest clock delta is within tolerance: 87.250706ms
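
The clock check above is a straight subtraction of the two timestamps: the guest's "date +%s.%N" result minus the host's remote time, which gives the 87.250706ms delta reported, comfortably inside the tolerance. The same arithmetic by hand:

    echo '1723810922.174464922 - 1723810922.087214216' | bc -l
    # .087250706  (about 87.25 ms)
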
	I0816 12:22:02.197502   11845 start.go:83] releasing machines lock for "addons-966941", held for 25.319977694s
	I0816 12:22:02.197526   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.197792   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:02.200291   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.200584   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.200611   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.200753   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.201288   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.201467   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.201542   11845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:22:02.201591   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:02.201679   11845 ssh_runner.go:195] Run: cat /version.json
	I0816 12:22:02.201697   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:02.204038   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204199   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204351   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.204376   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204480   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:02.204573   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.204599   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204622   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.204747   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:02.204804   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:02.204893   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.204969   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:02.205046   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:02.205194   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:02.282405   11845 ssh_runner.go:195] Run: systemctl --version
	I0816 12:22:02.310959   11845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:22:02.463048   11845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:22:02.469095   11845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:22:02.469163   11845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:22:02.484875   11845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:22:02.484896   11845 start.go:495] detecting cgroup driver to use...
	I0816 12:22:02.484968   11845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:22:02.500442   11845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:22:02.513648   11845 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:22:02.513694   11845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:22:02.526809   11845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:22:02.539950   11845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:22:02.654923   11845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:22:02.814398   11845 docker.go:233] disabling docker service ...
	I0816 12:22:02.814468   11845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:22:02.828862   11845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:22:02.841976   11845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:22:02.968630   11845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:22:03.085351   11845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:22:03.098835   11845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:22:03.117187   11845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:22:03.117258   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.127227   11845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:22:03.127281   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.137242   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.146828   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.156523   11845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:22:03.166478   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.176319   11845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.193240   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
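
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroupfs is set as the cgroup manager, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A sanity check of the result (the expected values in the comments follow from the edits above):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
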
	I0816 12:22:03.203596   11845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:22:03.212489   11845 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:22:03.212530   11845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:22:03.224505   11845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
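
The failed sysctl above only means br_netfilter was not loaded yet, so the bridge sysctl tree did not exist; loading the module and enabling IPv4 forwarding are the two networking prerequisites being handled here. Verifying them afterwards (sketch):

    lsmod | grep br_netfilter                   # module is now loaded
    sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is in
    cat /proc/sys/net/ipv4/ip_forward           # should print 1
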
	I0816 12:22:03.233622   11845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:22:03.348164   11845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:22:03.482079   11845 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:22:03.482211   11845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:22:03.486809   11845 start.go:563] Will wait 60s for crictl version
	I0816 12:22:03.486867   11845 ssh_runner.go:195] Run: which crictl
	I0816 12:22:03.490519   11845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:22:03.527537   11845 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:22:03.527664   11845 ssh_runner.go:195] Run: crio --version
	I0816 12:22:03.554262   11845 ssh_runner.go:195] Run: crio --version
	I0816 12:22:03.590661   11845 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:22:03.591739   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:03.594182   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:03.594464   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:03.594492   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:03.594670   11845 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:22:03.598688   11845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:22:03.610921   11845 kubeadm.go:883] updating cluster {Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:22:03.611044   11845 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:22:03.611103   11845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:22:03.645533   11845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 12:22:03.645614   11845 ssh_runner.go:195] Run: which lz4
	I0816 12:22:03.649556   11845 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 12:22:03.653575   11845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 12:22:03.653599   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 12:22:04.891048   11845 crio.go:462] duration metric: took 1.241534232s to copy over tarball
	I0816 12:22:04.891116   11845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 12:22:06.973093   11845 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.081946785s)
	I0816 12:22:06.973131   11845 crio.go:469] duration metric: took 2.082059301s to extract the tarball
	I0816 12:22:06.973141   11845 ssh_runner.go:146] rm: /preloaded.tar.lz4
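
The preload handling above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, copy the roughly 389 MB tarball over if not, unpack it into /var with lz4, then delete it. A standalone sketch of the same sequence ("guest" stands in for the SSH target; paths are taken from this log):

    TARBALL=/home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
    ssh guest 'stat -c "%s %y" /preloaded.tar.lz4' || scp "$TARBALL" guest:/preloaded.tar.lz4
    ssh guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
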
	I0816 12:22:07.009569   11845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:22:07.052109   11845 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:22:07.052137   11845 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:22:07.052146   11845 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.0 crio true true} ...
	I0816 12:22:07.052269   11845 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-966941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
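
The empty "ExecStart=" followed by a second ExecStart line above is the standard systemd drop-in idiom: the first clears whatever kubelet.service inherited, the second substitutes the minikube-specific flags. Once the drop-in lands on the guest, the merged unit can be inspected with (sketch):

    systemctl cat kubelet              # shows kubelet.service plus any *.conf drop-ins merged in
    systemd-delta --type=extended      # lists units that carry drop-in overrides
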
	I0816 12:22:07.052339   11845 ssh_runner.go:195] Run: crio config
	I0816 12:22:07.096889   11845 cni.go:84] Creating CNI manager for ""
	I0816 12:22:07.096924   11845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:22:07.096936   11845 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:22:07.096963   11845 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-966941 NodeName:addons-966941 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:22:07.097101   11845 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-966941"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
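
The generated config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into the single file that is written to /var/tmp/minikube/kubeadm.yaml.new and fed to kubeadm later in this log. One hedged way to exercise it without actually bootstrapping is a dry run:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
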
	
	I0816 12:22:07.097171   11845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:22:07.107194   11845 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:22:07.107260   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 12:22:07.116707   11845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 12:22:07.132780   11845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:22:07.148546   11845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0816 12:22:07.164109   11845 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0816 12:22:07.167796   11845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:22:07.179782   11845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:22:07.286913   11845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:22:07.302836   11845 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941 for IP: 192.168.39.129
	I0816 12:22:07.302855   11845 certs.go:194] generating shared ca certs ...
	I0816 12:22:07.302870   11845 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.302995   11845 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:22:07.515227   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt ...
	I0816 12:22:07.515252   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt: {Name:mkf4a08bf4f9517231e76adaa006f3cfec5b8c3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.515400   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key ...
	I0816 12:22:07.515410   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key: {Name:mkeb561bc804238c8341bd7caa5e937264af6e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.515479   11845 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:22:07.665289   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt ...
	I0816 12:22:07.665313   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt: {Name:mk6e24ac0958fb888c7b45e9b9ff4f9b47a400f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.665469   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key ...
	I0816 12:22:07.665479   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key: {Name:mk52e12681516341823900b431cf27eff2c25926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.665544   11845 certs.go:256] generating profile certs ...
	I0816 12:22:07.665593   11845 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.key
	I0816 12:22:07.665608   11845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt with IP's: []
	I0816 12:22:07.869665   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt ...
	I0816 12:22:07.869706   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: {Name:mk4fd6c50a34763252e9be1fa8164abc03f798c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.869941   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.key ...
	I0816 12:22:07.869962   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.key: {Name:mke945e0b84d1e635d8998ea4f5f2312ee99d533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.870064   11845 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab
	I0816 12:22:07.870089   11845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]
	I0816 12:22:08.155629   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab ...
	I0816 12:22:08.155657   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab: {Name:mkc9e55f65455e2a2112f379dd348dcb607f2c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.155840   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab ...
	I0816 12:22:08.155861   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab: {Name:mk3c3c0d79dd1f95b45b87062287113b27972793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.155955   11845 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt
	I0816 12:22:08.156046   11845 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key
	I0816 12:22:08.156115   11845 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key
	I0816 12:22:08.156137   11845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt with IP's: []
	I0816 12:22:08.256661   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt ...
	I0816 12:22:08.256690   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt: {Name:mke2f3cbe449def9dd50c5af26d075a11d855b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.256872   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key ...
	I0816 12:22:08.256889   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key: {Name:mk8c52ea3248ce4904141d7f91b4cbbec73df04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.257118   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:22:08.257157   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:22:08.257177   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:22:08.257202   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:22:08.257768   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:22:08.283301   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:22:08.306394   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:22:08.329281   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:22:08.351799   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 12:22:08.374345   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 12:22:08.397056   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:22:08.426561   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 12:22:08.450869   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:22:08.474499   11845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:22:08.490373   11845 ssh_runner.go:195] Run: openssl version
	I0816 12:22:08.495969   11845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:22:08.506077   11845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:22:08.510159   11845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:22:08.510206   11845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:22:08.515693   11845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
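
The /etc/ssl/certs/b5213941.0 name above is OpenSSL's hashed-directory convention: the file name is the subject hash of the CA certificate (what the "openssl x509 -hash -noout" call a few lines earlier computes) plus a .0 suffix, which lets TLS clients find a CA by subject. Reproducing the name by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem
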
	I0816 12:22:08.525583   11845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:22:08.529444   11845 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:22:08.529487   11845 kubeadm.go:392] StartCluster: {Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:22:08.529553   11845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:22:08.529606   11845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:22:08.562869   11845 cri.go:89] found id: ""
	I0816 12:22:08.562938   11845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 12:22:08.572825   11845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 12:22:08.581820   11845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 12:22:08.590928   11845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 12:22:08.590947   11845 kubeadm.go:157] found existing configuration files:
	
	I0816 12:22:08.590988   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 12:22:08.599064   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 12:22:08.599122   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 12:22:08.608131   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 12:22:08.616329   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 12:22:08.616379   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 12:22:08.624988   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 12:22:08.633413   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 12:22:08.633466   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 12:22:08.641933   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 12:22:08.650507   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 12:22:08.650560   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 12:22:08.659696   11845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 12:22:08.716290   11845 kubeadm.go:310] W0816 12:22:08.700218     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:22:08.716871   11845 kubeadm.go:310] W0816 12:22:08.700996     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:22:08.833875   11845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 12:22:18.614213   11845 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 12:22:18.614280   11845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 12:22:18.614382   11845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 12:22:18.614524   11845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 12:22:18.614640   11845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 12:22:18.614735   11845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 12:22:18.616409   11845 out.go:235]   - Generating certificates and keys ...
	I0816 12:22:18.616487   11845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 12:22:18.616549   11845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 12:22:18.616609   11845 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 12:22:18.616659   11845 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 12:22:18.616710   11845 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 12:22:18.616773   11845 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 12:22:18.616861   11845 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 12:22:18.617015   11845 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-966941 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0816 12:22:18.617100   11845 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 12:22:18.617228   11845 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-966941 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0816 12:22:18.617319   11845 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 12:22:18.617419   11845 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 12:22:18.617522   11845 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 12:22:18.617602   11845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 12:22:18.617682   11845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 12:22:18.617768   11845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 12:22:18.617846   11845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 12:22:18.617938   11845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 12:22:18.618005   11845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 12:22:18.618122   11845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 12:22:18.618223   11845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 12:22:18.619736   11845 out.go:235]   - Booting up control plane ...
	I0816 12:22:18.619829   11845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 12:22:18.619917   11845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 12:22:18.619992   11845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 12:22:18.620091   11845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 12:22:18.620169   11845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 12:22:18.620203   11845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 12:22:18.620315   11845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 12:22:18.620423   11845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 12:22:18.620512   11845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.296739ms
	I0816 12:22:18.620583   11845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 12:22:18.620635   11845 kubeadm.go:310] [api-check] The API server is healthy after 5.001574902s
	I0816 12:22:18.620731   11845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 12:22:18.620839   11845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 12:22:18.620888   11845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 12:22:18.621055   11845 kubeadm.go:310] [mark-control-plane] Marking the node addons-966941 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 12:22:18.621112   11845 kubeadm.go:310] [bootstrap-token] Using token: 7fq1v5.5ofnkq5fbptaxy8o
	I0816 12:22:18.622308   11845 out.go:235]   - Configuring RBAC rules ...
	I0816 12:22:18.622392   11845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 12:22:18.622465   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 12:22:18.622584   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 12:22:18.622754   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 12:22:18.622901   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 12:22:18.623039   11845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 12:22:18.623164   11845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 12:22:18.623208   11845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 12:22:18.623282   11845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 12:22:18.623292   11845 kubeadm.go:310] 
	I0816 12:22:18.623377   11845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 12:22:18.623387   11845 kubeadm.go:310] 
	I0816 12:22:18.623477   11845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 12:22:18.623484   11845 kubeadm.go:310] 
	I0816 12:22:18.623505   11845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 12:22:18.623559   11845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 12:22:18.623606   11845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 12:22:18.623612   11845 kubeadm.go:310] 
	I0816 12:22:18.623656   11845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 12:22:18.623661   11845 kubeadm.go:310] 
	I0816 12:22:18.623703   11845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 12:22:18.623709   11845 kubeadm.go:310] 
	I0816 12:22:18.623752   11845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 12:22:18.623817   11845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 12:22:18.623880   11845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 12:22:18.623886   11845 kubeadm.go:310] 
	I0816 12:22:18.623974   11845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 12:22:18.624081   11845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 12:22:18.624096   11845 kubeadm.go:310] 
	I0816 12:22:18.624198   11845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7fq1v5.5ofnkq5fbptaxy8o \
	I0816 12:22:18.624320   11845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 12:22:18.624348   11845 kubeadm.go:310] 	--control-plane 
	I0816 12:22:18.624356   11845 kubeadm.go:310] 
	I0816 12:22:18.624466   11845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 12:22:18.624476   11845 kubeadm.go:310] 
	I0816 12:22:18.624577   11845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7fq1v5.5ofnkq5fbptaxy8o \
	I0816 12:22:18.624725   11845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
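Note: the join commands above embed this run's bootstrap token and CA hash. As a hedged aside (the standard kubeadm-documented procedure, not something taken from this log), the --discovery-token-ca-cert-hash value can be recomputed on the control plane from the cluster CA; the certificate path below follows the certificateDir reported earlier ("/var/lib/minikube/certs") and assumes the default RSA CA key:

    # recompute the SHA-256 hash of the CA public key used by --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'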
	I0816 12:22:18.624738   11845 cni.go:84] Creating CNI manager for ""
	I0816 12:22:18.624744   11845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:22:18.626158   11845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 12:22:18.627164   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 12:22:18.637696   11845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
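Note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration for the kvm2 + crio combination recommended earlier. The snippet below is an illustrative bridge conflist of the same general shape only; the field values (bridge name, subnet, plugin options) are assumptions for illustration and not the verbatim contents written in this run:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF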
	I0816 12:22:18.656322   11845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 12:22:18.656398   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:18.656402   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-966941 minikube.k8s.io/updated_at=2024_08_16T12_22_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=addons-966941 minikube.k8s.io/primary=true
	I0816 12:22:18.798830   11845 ops.go:34] apiserver oom_adj: -16
	I0816 12:22:18.798952   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:19.299911   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:19.799392   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:20.299457   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:20.800128   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:21.300064   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:21.799276   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:22.299033   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:22.799901   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:23.299827   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:23.468244   11845 kubeadm.go:1113] duration metric: took 4.811897383s to wait for elevateKubeSystemPrivileges
	I0816 12:22:23.468276   11845 kubeadm.go:394] duration metric: took 14.938792629s to StartCluster
	I0816 12:22:23.468295   11845 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:23.468438   11845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:22:23.468766   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:23.468983   11845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 12:22:23.468998   11845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:22:23.469056   11845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
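Note: the toEnable map above lists which addons this test profile turns on (ingress, metrics-server, registry, csi-hostpath-driver, etc.). Outside the test harness, the same switches are driven through the minikube CLI against the same profile name; a minimal sketch (commands are standard minikube usage, the addon names here are just examples from the map above):

    # inspect and toggle addons for the profile used in this run
    minikube addons list -p addons-966941
    minikube addons enable metrics-server -p addons-966941
    minikube addons disable helm-tiller -p addons-966941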
	I0816 12:22:23.469150   11845 addons.go:69] Setting yakd=true in profile "addons-966941"
	I0816 12:22:23.469189   11845 addons.go:234] Setting addon yakd=true in "addons-966941"
	I0816 12:22:23.469225   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469235   11845 config.go:182] Loaded profile config "addons-966941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:22:23.469291   11845 addons.go:69] Setting inspektor-gadget=true in profile "addons-966941"
	I0816 12:22:23.469321   11845 addons.go:234] Setting addon inspektor-gadget=true in "addons-966941"
	I0816 12:22:23.469355   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469436   11845 addons.go:69] Setting storage-provisioner=true in profile "addons-966941"
	I0816 12:22:23.469469   11845 addons.go:234] Setting addon storage-provisioner=true in "addons-966941"
	I0816 12:22:23.469502   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469661   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.469687   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.469713   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.469741   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.469929   11845 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-966941"
	I0816 12:22:23.469946   11845 addons.go:69] Setting registry=true in profile "addons-966941"
	I0816 12:22:23.469965   11845 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-966941"
	I0816 12:22:23.469963   11845 addons.go:69] Setting volcano=true in profile "addons-966941"
	I0816 12:22:23.469972   11845 addons.go:234] Setting addon registry=true in "addons-966941"
	I0816 12:22:23.469970   11845 addons.go:69] Setting volumesnapshots=true in profile "addons-966941"
	I0816 12:22:23.469996   11845 addons.go:234] Setting addon volcano=true in "addons-966941"
	I0816 12:22:23.469998   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469932   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470019   11845 addons.go:234] Setting addon volumesnapshots=true in "addons-966941"
	I0816 12:22:23.470025   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470047   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470052   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470339   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470358   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470387   11845 addons.go:69] Setting metrics-server=true in profile "addons-966941"
	I0816 12:22:23.470395   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470413   11845 addons.go:234] Setting addon metrics-server=true in "addons-966941"
	I0816 12:22:23.470424   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470435   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470435   11845 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-966941"
	I0816 12:22:23.470446   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470468   11845 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-966941"
	I0816 12:22:23.470751   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470791   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470795   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470801   11845 addons.go:69] Setting gcp-auth=true in profile "addons-966941"
	I0816 12:22:23.470821   11845 mustload.go:65] Loading cluster: addons-966941
	I0816 12:22:23.469998   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470838   11845 addons.go:69] Setting helm-tiller=true in profile "addons-966941"
	I0816 12:22:23.470860   11845 addons.go:234] Setting addon helm-tiller=true in "addons-966941"
	I0816 12:22:23.470881   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.471196   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471198   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471217   11845 addons.go:69] Setting cloud-spanner=true in profile "addons-966941"
	I0816 12:22:23.471221   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.471240   11845 addons.go:234] Setting addon cloud-spanner=true in "addons-966941"
	I0816 12:22:23.471262   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.471276   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.471341   11845 addons.go:69] Setting ingress-dns=true in profile "addons-966941"
	I0816 12:22:23.471362   11845 addons.go:234] Setting addon ingress-dns=true in "addons-966941"
	I0816 12:22:23.471546   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.471597   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471622   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.471663   11845 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-966941"
	I0816 12:22:23.471705   11845 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-966941"
	I0816 12:22:23.471861   11845 addons.go:69] Setting default-storageclass=true in profile "addons-966941"
	I0816 12:22:23.471885   11845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-966941"
	I0816 12:22:23.471910   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471941   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470824   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470428   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.472380   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.474283   11845 config.go:182] Loaded profile config "addons-966941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:22:23.474634   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.474661   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.475322   11845 out.go:177] * Verifying Kubernetes components...
	I0816 12:22:23.477124   11845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:22:23.470832   11845 addons.go:69] Setting ingress=true in profile "addons-966941"
	I0816 12:22:23.477383   11845 addons.go:234] Setting addon ingress=true in "addons-966941"
	I0816 12:22:23.477443   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.477913   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.477954   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.490375   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0816 12:22:23.491344   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.498560   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0816 12:22:23.498677   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.498696   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.499177   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.499789   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.499828   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.500324   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.500413   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0816 12:22:23.500495   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0816 12:22:23.501008   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.501116   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.501130   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.501141   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.501449   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.501580   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.501594   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.501939   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.501948   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.501968   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.502542   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.502574   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.502813   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.502824   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.503235   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.503800   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.503832   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.503857   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0816 12:22:23.504208   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.504239   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40709
	I0816 12:22:23.504638   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.504664   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.505150   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.507689   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0816 12:22:23.513139   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0816 12:22:23.513167   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0816 12:22:23.513252   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I0816 12:22:23.513505   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.513550   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.513577   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.513654   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.513673   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.514105   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.514121   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.514247   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.514258   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.514376   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.514398   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.514835   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.514918   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.514957   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.515072   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.515085   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.515138   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.515726   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.515745   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.516126   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.516158   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.516855   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.516885   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.520882   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.520931   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.521174   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.521253   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.521394   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.521407   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.521783   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.521810   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.522169   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.522466   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.522499   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.523108   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.523151   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.524442   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
	I0816 12:22:23.537094   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I0816 12:22:23.537511   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.537955   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.537975   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.538312   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.538503   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.539797   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0816 12:22:23.540209   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.540368   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.541506   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0816 12:22:23.541599   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.541618   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.541720   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I0816 12:22:23.542155   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.542498   11845 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 12:22:23.542709   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.542727   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.542796   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.543450   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.543494   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.543505   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.543811   11845 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 12:22:23.543826   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 12:22:23.543842   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.544082   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.544483   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.544513   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.545195   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.545227   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.545819   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.545995   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.547492   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.547991   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.548510   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.548531   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.548692   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.548830   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.549389   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 12:22:23.549427   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.549591   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.549610   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.550219   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.550246   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.550473   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.550591   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.551103   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 12:22:23.551119   11845 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 12:22:23.551136   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.552752   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.554172   11845 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0816 12:22:23.554668   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.554903   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.554928   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.555234   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.555396   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.555429   11845 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0816 12:22:23.555439   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0816 12:22:23.555454   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.555492   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.555963   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.556269   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0816 12:22:23.557218   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.557746   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.557771   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.558100   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.559138   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.559571   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.560121   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0816 12:22:23.560336   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.560353   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.560430   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.560581   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.560644   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.560915   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.561051   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.561411   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.561428   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.561795   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.561975   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.563802   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.565590   11845 addons.go:234] Setting addon default-storageclass=true in "addons-966941"
	I0816 12:22:23.565637   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.565771   11845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 12:22:23.565914   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0816 12:22:23.566027   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.566058   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.566268   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.566724   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.566741   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.567118   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.567151   11845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:22:23.567165   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 12:22:23.567183   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.567333   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.569799   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.571449   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.571680   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0816 12:22:23.571968   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.571986   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.572210   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.572378   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.572523   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.572635   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.573906   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:22:23.575359   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:22:23.575492   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0816 12:22:23.576076   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.576575   11845 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:22:23.576596   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 12:22:23.576615   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.576636   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.576653   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.577037   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.577195   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.577857   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0816 12:22:23.578801   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.578860   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.579381   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.579398   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.579946   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.580242   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42409
	I0816 12:22:23.580494   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0816 12:22:23.580647   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.580659   11845 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 12:22:23.580738   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.581129   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.581147   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.581471   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.581647   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.582173   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 12:22:23.582201   11845 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 12:22:23.582220   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.582373   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.582472   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0816 12:22:23.582829   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.584158   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.584175   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.584226   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.584245   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.584263   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.584291   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.584391   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.584401   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.584754   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.584985   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.585825   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.585929   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.585937   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0816 12:22:23.585701   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.586687   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.586744   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.586785   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.587317   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0816 12:22:23.587364   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.587604   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.587984   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.588161   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.588180   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.588758   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0816 12:22:23.588835   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.588962   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.588970   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.589153   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.589190   11845 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 12:22:23.589295   11845 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 12:22:23.589371   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.589571   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.589572   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.589781   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.590236   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.590258   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.590350   11845 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 12:22:23.590463   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 12:22:23.590474   11845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 12:22:23.590490   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.590574   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.590751   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.591455   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.592456   11845 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:22:23.592480   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 12:22:23.592499   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.592573   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.592619   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:23.592627   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:23.592768   11845 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 12:22:23.593757   11845 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-966941"
	I0816 12:22:23.593904   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.593996   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0816 12:22:23.594290   11845 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 12:22:23.594295   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.594304   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 12:22:23.594320   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.594339   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.594436   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.595656   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:23.595673   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:23.595747   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.595829   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:23.595889   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:23.596017   11845 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 12:22:23.596929   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:23.596957   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:23.596970   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:23.596959   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	W0816 12:22:23.597047   11845 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 12:22:23.597104   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.597131   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.597161   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.597178   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.597411   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.597477   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.597559   11845 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:22:23.597571   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 12:22:23.597587   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.597650   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.597860   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.598023   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.598042   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.598456   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I0816 12:22:23.598537   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.598543   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.598555   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.598887   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.598905   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.599104   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.599248   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.599623   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.599631   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.599646   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.600188   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.601018   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.600452   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.601051   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.600853   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.601295   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.601362   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.601379   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.601407   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.601671   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.601805   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.601805   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.601833   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.601983   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.602118   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.602237   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.602283   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.602640   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.602672   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.604167   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.604749   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.605344   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.605807   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.605843   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.605865   11845 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 12:22:23.606165   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.606187   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.606418   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.606478   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.606727   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.606737   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.606846   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.606901   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.606999   11845 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 12:22:23.607012   11845 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 12:22:23.607017   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.607025   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.607072   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.609876   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.610272   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.610294   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.610452   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.610615   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.610730   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.610834   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	W0816 12:22:23.612590   11845 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42514->192.168.39.129:22: read: connection reset by peer
	I0816 12:22:23.612616   11845 retry.go:31] will retry after 194.925376ms: ssh: handshake failed: read tcp 192.168.39.1:42514->192.168.39.129:22: read: connection reset by peer
	I0816 12:22:23.621657   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0816 12:22:23.621877   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0816 12:22:23.622012   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0816 12:22:23.622136   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.622238   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.622397   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.622656   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.622678   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.622802   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.622819   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.623162   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.623181   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.623288   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.623306   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.623461   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.623466   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.623615   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.624316   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.624341   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.625197   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.625434   11845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 12:22:23.625446   11845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 12:22:23.625463   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.625510   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.627185   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 12:22:23.628405   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 12:22:23.628762   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.629316   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.629346   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.629526   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.629724   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.629805   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0816 12:22:23.629978   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.630104   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.630387   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.630906   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.630920   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.630956   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 12:22:23.631192   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.631374   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.633515   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 12:22:23.634544   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 12:22:23.635576   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 12:22:23.636748   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 12:22:23.637897   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 12:22:23.638825   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 12:22:23.638844   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 12:22:23.638866   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.642288   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.642741   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.642759   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.642793   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 12:22:23.642994   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.643190   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.643265   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.643330   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.643478   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.643830   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.643845   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.644107   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.644284   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.646627   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.648447   11845 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 12:22:23.649649   11845 out.go:177]   - Using image docker.io/busybox:stable
	I0816 12:22:23.650774   11845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:22:23.650791   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 12:22:23.650809   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.653467   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.653851   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.653873   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.654039   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.654217   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.654369   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.654521   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.941405   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 12:22:23.941431   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 12:22:24.022139   11845 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 12:22:24.022163   11845 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 12:22:24.023642   11845 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 12:22:24.023658   11845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 12:22:24.043309   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 12:22:24.043329   11845 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 12:22:24.094527   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:22:24.096079   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:22:24.099657   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 12:22:24.099671   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 12:22:24.110544   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 12:22:24.122528   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:22:24.124784   11845 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0816 12:22:24.124800   11845 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0816 12:22:24.146421   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 12:22:24.146442   11845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 12:22:24.148072   11845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:22:24.148158   11845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 12:22:24.158647   11845 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 12:22:24.158659   11845 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 12:22:24.166111   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:22:24.182153   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 12:22:24.205113   11845 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 12:22:24.205135   11845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 12:22:24.225502   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 12:22:24.225529   11845 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 12:22:24.226264   11845 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:22:24.226285   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 12:22:24.228254   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:22:24.280203   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 12:22:24.280231   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 12:22:24.372348   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:22:24.372369   11845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 12:22:24.400643   11845 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 12:22:24.400668   11845 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 12:22:24.404851   11845 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 12:22:24.404873   11845 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0816 12:22:24.496122   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 12:22:24.496149   11845 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 12:22:24.512916   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:22:24.515105   11845 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 12:22:24.515128   11845 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 12:22:24.521305   11845 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 12:22:24.521323   11845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 12:22:24.583410   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 12:22:24.583436   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 12:22:24.621221   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:22:24.665459   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 12:22:24.704783   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 12:22:24.704807   11845 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 12:22:24.716503   11845 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 12:22:24.716521   11845 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 12:22:24.784729   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:22:24.784758   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 12:22:24.829846   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 12:22:24.829872   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 12:22:24.925868   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:22:24.938254   11845 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:22:24.938275   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 12:22:24.963895   11845 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 12:22:24.963919   11845 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 12:22:25.097380   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 12:22:25.097405   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 12:22:25.154382   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:22:25.238674   11845 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 12:22:25.238700   11845 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 12:22:25.318146   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 12:22:25.318172   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 12:22:25.518885   11845 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:22:25.518912   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 12:22:25.615746   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 12:22:25.615779   11845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 12:22:25.761737   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:22:25.815462   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 12:22:25.815484   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 12:22:26.087710   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 12:22:26.087732   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 12:22:26.433359   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:22:26.433379   11845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 12:22:26.867961   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:22:28.504924   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.408815225s)
	I0816 12:22:28.504971   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.504984   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.504989   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.394415434s)
	I0816 12:22:28.504997   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.410438542s)
	I0816 12:22:28.505026   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505043   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.382495202s)
	I0816 12:22:28.505045   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505062   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505072   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505075   11845 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.356960751s)
	I0816 12:22:28.505122   11845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.356940327s)
	I0816 12:22:28.505136   11845 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
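The bash pipeline that just completed rewrites the coredns ConfigMap in place: one sed expression inserts a hosts block (mapping host.minikube.internal to the host bridge IP 192.168.39.1, with fallthrough) ahead of the "forward . /etc/resolv.conf" directive, and the other inserts "log" ahead of "errors". A rough sketch of the resulting Corefile fragment, assuming the usual kubeadm defaults around the two inserted pieces (only the log line and the hosts block are taken from the command above):

	.:53 {
	    log
	    errors
	    health
	    kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	}
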
	I0816 12:22:28.505205   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505244   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505253   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505261   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505267   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505340   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505365   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505384   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505393   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505392   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505400   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505370   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505413   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505422   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505430   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505485   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505494   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505612   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505622   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505630   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505635   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505661   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505668   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505677   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505683   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.506274   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.506303   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.506310   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.506535   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.506552   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.506985   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.506998   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.507017   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.507952   11845 node_ready.go:35] waiting up to 6m0s for node "addons-966941" to be "Ready" ...
	I0816 12:22:28.548105   11845 node_ready.go:49] node "addons-966941" has status "Ready":"True"
	I0816 12:22:28.548125   11845 node_ready.go:38] duration metric: took 40.150541ms for node "addons-966941" to be "Ready" ...
	I0816 12:22:28.548135   11845 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:22:28.559837   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.559859   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.560067   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.560081   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.589970   11845 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmsfb" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:28.648867   11845 pod_ready.go:93] pod "coredns-6f6b679f8f-jmsfb" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:28.648893   11845 pod_ready.go:82] duration metric: took 58.898081ms for pod "coredns-6f6b679f8f-jmsfb" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:28.648918   11845 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xvjnw" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:28.768413   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.586229371s)
	I0816 12:22:28.768467   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.768481   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.768413   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.602267797s)
	I0816 12:22:28.768534   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.768547   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.768786   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.768805   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.768817   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.768825   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.770140   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.770146   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.770158   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.770162   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.770140   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.770136   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.770172   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.770229   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.770407   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.770420   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.820892   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.820930   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.821214   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.821233   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:29.050091   11845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-966941" context rescaled to 1 replicas
	I0816 12:22:29.154870   11845 pod_ready.go:93] pod "coredns-6f6b679f8f-xvjnw" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:29.154891   11845 pod_ready.go:82] duration metric: took 505.964599ms for pod "coredns-6f6b679f8f-xvjnw" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:29.154902   11845 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:30.682641   11845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 12:22:30.682678   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:30.685680   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:30.686039   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:30.686068   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:30.686248   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:30.686471   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:30.686628   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:30.686781   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:31.049976   11845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 12:22:31.194037   11845 pod_ready.go:103] pod "etcd-addons-966941" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:31.333523   11845 addons.go:234] Setting addon gcp-auth=true in "addons-966941"
	I0816 12:22:31.333574   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:31.333893   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:31.333926   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:31.349282   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0816 12:22:31.349713   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:31.350164   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:31.350184   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:31.350508   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:31.351001   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:31.351032   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:31.367339   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0816 12:22:31.367754   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:31.368298   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:31.368322   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:31.368611   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:31.368842   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:31.370404   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:31.370635   11845 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 12:22:31.370662   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:31.373350   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:31.373773   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:31.373801   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:31.373978   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:31.374172   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:31.374360   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:31.374531   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:31.703903   11845 pod_ready.go:93] pod "etcd-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.703927   11845 pod_ready.go:82] duration metric: took 2.549017221s for pod "etcd-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.703940   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.722356   11845 pod_ready.go:93] pod "kube-apiserver-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.722378   11845 pod_ready.go:82] duration metric: took 18.43144ms for pod "kube-apiserver-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.722396   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.739275   11845 pod_ready.go:93] pod "kube-controller-manager-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.739298   11845 pod_ready.go:82] duration metric: took 16.893964ms for pod "kube-controller-manager-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.739312   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qnd5q" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.779959   11845 pod_ready.go:93] pod "kube-proxy-qnd5q" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.779981   11845 pod_ready.go:82] duration metric: took 40.66068ms for pod "kube-proxy-qnd5q" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.779993   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:32.614978   11845 pod_ready.go:93] pod "kube-scheduler-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:32.615001   11845 pod_ready.go:82] duration metric: took 835.000712ms for pod "kube-scheduler-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:32.615015   11845 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:32.987059   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.758774581s)
	I0816 12:22:32.987106   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987113   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.474166804s)
	I0816 12:22:32.987119   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987151   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987162   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987191   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.365943264s)
	I0816 12:22:32.987213   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987226   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987250   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.321765642s)
	I0816 12:22:32.987272   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987282   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987293   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.06138814s)
	I0816 12:22:32.987323   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987338   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987425   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.833001652s)
	W0816 12:22:32.987462   11845 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 12:22:32.987492   11845 retry.go:31] will retry after 159.215338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
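The "ensure CRDs are installed first" failure captured above is standard kubectl behavior: the csi-hostpath VolumeSnapshotClass is applied in the same batch as the CRDs that define it, and the REST mapping for the new kind is not yet available, so the apply exits non-zero even though the CRDs themselves were created; minikube simply retries after ~160ms, as logged. A minimal manual equivalent that sidesteps the retry (a sketch only; the kubectl wait step is an assumption about one way to sequence the apply, not something minikube does here) would register the CRD manifests first, wait for them to be established, then apply the dependent objects:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
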
	I0816 12:22:32.987576   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.225805494s)
	I0816 12:22:32.987601   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987611   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987664   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987667   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987671   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987681   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987685   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987688   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987697   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987705   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987711   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987720   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987731   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987735   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987743   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987746   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987749   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987752   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987755   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987760   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987762   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987767   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987690   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987792   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.989391   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989415   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989437   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989444   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989453   11845 addons.go:475] Verifying addon metrics-server=true in "addons-966941"
	I0816 12:22:32.989464   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989477   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989486   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.989509   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.989564   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989601   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989621   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989630   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989639   11845 addons.go:475] Verifying addon ingress=true in "addons-966941"
	I0816 12:22:32.989682   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989442   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989701   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989707   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989711   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989904   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989918   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989715   11845 addons.go:475] Verifying addon registry=true in "addons-966941"
	I0816 12:22:32.989742   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.990014   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.990861   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.990891   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.991233   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.992552   11845 out.go:177] * Verifying ingress addon...
	I0816 12:22:32.992557   11845 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-966941 service yakd-dashboard -n yakd-dashboard
	
	I0816 12:22:32.992552   11845 out.go:177] * Verifying registry addon...
	I0816 12:22:32.994649   11845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 12:22:32.994649   11845 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 12:22:33.011466   11845 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 12:22:33.011491   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:33.011619   11845 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 12:22:33.011631   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:33.147161   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:22:33.525848   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:33.528790   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:33.864826   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.996816198s)
	I0816 12:22:33.864845   11845 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.494187916s)
	I0816 12:22:33.864883   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:33.864897   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:33.865221   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:33.865254   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:33.865265   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:33.865274   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:33.865287   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:33.865512   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:33.865529   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:33.865541   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:33.865556   11845 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-966941"
	I0816 12:22:33.866611   11845 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 12:22:33.866674   11845 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 12:22:33.868781   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:22:33.869470   11845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 12:22:33.870349   11845 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 12:22:33.870371   11845 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 12:22:33.886491   11845 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 12:22:33.886518   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:33.972096   11845 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 12:22:33.972120   11845 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 12:22:34.006271   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:34.006533   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:34.050901   11845 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:22:34.050928   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 12:22:34.109041   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:22:34.387980   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:34.500505   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:34.500624   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:34.628445   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:34.874499   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:35.000252   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:35.000615   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:35.042979   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.895774601s)
	I0816 12:22:35.043031   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.043053   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.043330   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.043349   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.043368   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.043380   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.043664   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.043684   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.392167   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.283088349s)
	I0816 12:22:35.392218   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.392235   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.392586   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:35.392625   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.392635   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.392644   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.392655   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.392887   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.392951   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.392957   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:35.393995   11845 addons.go:475] Verifying addon gcp-auth=true in "addons-966941"
	I0816 12:22:35.395823   11845 out.go:177] * Verifying gcp-auth addon...
	I0816 12:22:35.397753   11845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 12:22:35.408487   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:35.438162   11845 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 12:22:35.438181   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:35.501852   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:35.502770   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:35.883274   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:35.902727   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:36.001402   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:36.002596   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:36.374254   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:36.401413   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:36.501446   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:36.501636   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:36.874904   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:36.901374   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:36.999651   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:37.000010   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:37.121105   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:37.374753   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:37.400837   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:37.499949   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:37.500302   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:37.879033   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:37.974183   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:37.999307   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:37.999500   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:38.572222   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:38.572530   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:38.574170   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:38.577558   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:38.874998   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:38.901220   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:39.000196   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:39.000503   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:39.122469   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:39.374716   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:39.400957   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:39.498852   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:39.499125   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:39.873798   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:39.901943   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:40.000064   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:40.000345   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:40.375439   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:40.401389   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:40.503091   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:40.503300   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:40.879224   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:40.902218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:41.000640   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:41.001241   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:41.399213   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:41.403044   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:41.499321   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:41.499961   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:41.621620   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:41.874315   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:41.901485   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:41.998390   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:41.998549   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:42.375934   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:42.401266   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:42.498921   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:42.499200   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:42.875387   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:42.902785   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:42.998900   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:42.999872   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:43.374824   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:43.401088   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:43.498446   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:43.498725   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:43.875379   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:43.901903   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:43.999281   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:43.999377   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:44.122122   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:44.374353   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:44.401556   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:44.500367   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:44.500821   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:44.874277   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:44.901461   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:44.999884   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:45.000149   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:45.374182   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:45.401335   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:45.500501   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:45.500635   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:45.875588   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:45.901307   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:45.999508   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:46.000122   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:46.373778   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:46.401493   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:46.499788   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:46.500119   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:46.621018   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:46.874772   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:46.900804   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:46.999438   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:47.000073   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:47.374902   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:47.401145   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:47.498942   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:47.499004   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:47.873894   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:47.901528   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:48.000268   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:48.001360   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:48.375118   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:48.400897   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:48.499656   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:48.500704   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:48.874350   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:48.901571   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:48.999732   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:48.999858   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:49.120968   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:49.376495   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:49.402146   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:49.503154   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:49.505216   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:49.874109   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:49.901306   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:49.999044   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:50.000586   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:50.374233   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:50.401364   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:50.499094   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:50.499453   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:50.874365   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:50.902033   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:50.998275   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:50.999883   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:51.121475   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:51.377854   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:51.401689   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:51.499858   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:51.499952   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:51.875795   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:51.900761   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:51.999047   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:51.999315   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:52.121184   11845 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:52.121208   11845 pod_ready.go:82] duration metric: took 19.506185483s for pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:52.121232   11845 pod_ready.go:39] duration metric: took 23.573086665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:22:52.121250   11845 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:22:52.121298   11845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:22:52.138217   11845 api_server.go:72] duration metric: took 28.669188574s to wait for apiserver process to appear ...
	I0816 12:22:52.138242   11845 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:22:52.138262   11845 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0816 12:22:52.142298   11845 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0816 12:22:52.143279   11845 api_server.go:141] control plane version: v1.31.0
	I0816 12:22:52.143297   11845 api_server.go:131] duration metric: took 5.048115ms to wait for apiserver health ...
	I0816 12:22:52.143304   11845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:22:52.150910   11845 system_pods.go:59] 18 kube-system pods found
	I0816 12:22:52.150932   11845 system_pods.go:61] "coredns-6f6b679f8f-jmsfb" [541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7] Running
	I0816 12:22:52.150939   11845 system_pods.go:61] "csi-hostpath-attacher-0" [5478a03b-ccb2-41ad-80b2-ac918d2be036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 12:22:52.150947   11845 system_pods.go:61] "csi-hostpath-resizer-0" [4d1634dc-6351-4561-985c-5ce419dd8959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 12:22:52.150958   11845 system_pods.go:61] "csi-hostpathplugin-hxhgw" [b59fe750-7fe2-4c40-bba9-836bc4990c73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 12:22:52.150969   11845 system_pods.go:61] "etcd-addons-966941" [98a85f02-a468-4db0-9f86-69a1339f6f3b] Running
	I0816 12:22:52.150975   11845 system_pods.go:61] "kube-apiserver-addons-966941" [93d8e2a8-a4b0-4e0e-a54f-db67df0f7d4a] Running
	I0816 12:22:52.150981   11845 system_pods.go:61] "kube-controller-manager-addons-966941" [b1bc1e28-2d78-4080-9d44-7d9fdfe18914] Running
	I0816 12:22:52.150990   11845 system_pods.go:61] "kube-ingress-dns-minikube" [ac8db978-31ce-467e-8c0c-585910bf0042] Running
	I0816 12:22:52.150998   11845 system_pods.go:61] "kube-proxy-qnd5q" [0d7c8f55-8a0f-4598-a0fd-2f7116e8af54] Running
	I0816 12:22:52.151002   11845 system_pods.go:61] "kube-scheduler-addons-966941" [28625162-35c5-4cc6-be67-f64f326e8edd] Running
	I0816 12:22:52.151008   11845 system_pods.go:61] "metrics-server-8988944d9-p6z8v" [32196dc2-ada2-4e60-b64c-573967f34e54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 12:22:52.151012   11845 system_pods.go:61] "nvidia-device-plugin-daemonset-t2vgg" [67831983-255a-47c4-9db7-8be119bea725] Running
	I0816 12:22:52.151018   11845 system_pods.go:61] "registry-6fb4cdfc84-pbs55" [ce8c7d7b-e1bd-4400-989e-ff5ee6472906] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 12:22:52.151026   11845 system_pods.go:61] "registry-proxy-ntgtj" [1d1c166b-3b57-45d7-a283-a4e340b16541] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 12:22:52.151033   11845 system_pods.go:61] "snapshot-controller-56fcc65765-c5drr" [071997c6-7740-4297-a69c-b4d219bbebc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.151041   11845 system_pods.go:61] "snapshot-controller-56fcc65765-ln299" [b41b38e8-3e51-4c0c-87b1-6d3abc4889a4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.151047   11845 system_pods.go:61] "storage-provisioner" [be4bc2aa-70f7-48ee-b9f1-46102ba63337] Running
	I0816 12:22:52.151055   11845 system_pods.go:61] "tiller-deploy-b48cc5f79-v26s2" [505f660d-cfba-443f-a970-69b28a26f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 12:22:52.151064   11845 system_pods.go:74] duration metric: took 7.754399ms to wait for pod list to return data ...
	I0816 12:22:52.151078   11845 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:22:52.152806   11845 default_sa.go:45] found service account: "default"
	I0816 12:22:52.152820   11845 default_sa.go:55] duration metric: took 1.735265ms for default service account to be created ...
	I0816 12:22:52.152826   11845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:22:52.159573   11845 system_pods.go:86] 18 kube-system pods found
	I0816 12:22:52.159593   11845 system_pods.go:89] "coredns-6f6b679f8f-jmsfb" [541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7] Running
	I0816 12:22:52.159602   11845 system_pods.go:89] "csi-hostpath-attacher-0" [5478a03b-ccb2-41ad-80b2-ac918d2be036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 12:22:52.159610   11845 system_pods.go:89] "csi-hostpath-resizer-0" [4d1634dc-6351-4561-985c-5ce419dd8959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 12:22:52.159621   11845 system_pods.go:89] "csi-hostpathplugin-hxhgw" [b59fe750-7fe2-4c40-bba9-836bc4990c73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 12:22:52.159631   11845 system_pods.go:89] "etcd-addons-966941" [98a85f02-a468-4db0-9f86-69a1339f6f3b] Running
	I0816 12:22:52.159638   11845 system_pods.go:89] "kube-apiserver-addons-966941" [93d8e2a8-a4b0-4e0e-a54f-db67df0f7d4a] Running
	I0816 12:22:52.159644   11845 system_pods.go:89] "kube-controller-manager-addons-966941" [b1bc1e28-2d78-4080-9d44-7d9fdfe18914] Running
	I0816 12:22:52.159654   11845 system_pods.go:89] "kube-ingress-dns-minikube" [ac8db978-31ce-467e-8c0c-585910bf0042] Running
	I0816 12:22:52.159661   11845 system_pods.go:89] "kube-proxy-qnd5q" [0d7c8f55-8a0f-4598-a0fd-2f7116e8af54] Running
	I0816 12:22:52.159665   11845 system_pods.go:89] "kube-scheduler-addons-966941" [28625162-35c5-4cc6-be67-f64f326e8edd] Running
	I0816 12:22:52.159670   11845 system_pods.go:89] "metrics-server-8988944d9-p6z8v" [32196dc2-ada2-4e60-b64c-573967f34e54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 12:22:52.159677   11845 system_pods.go:89] "nvidia-device-plugin-daemonset-t2vgg" [67831983-255a-47c4-9db7-8be119bea725] Running
	I0816 12:22:52.159683   11845 system_pods.go:89] "registry-6fb4cdfc84-pbs55" [ce8c7d7b-e1bd-4400-989e-ff5ee6472906] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 12:22:52.159691   11845 system_pods.go:89] "registry-proxy-ntgtj" [1d1c166b-3b57-45d7-a283-a4e340b16541] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 12:22:52.159700   11845 system_pods.go:89] "snapshot-controller-56fcc65765-c5drr" [071997c6-7740-4297-a69c-b4d219bbebc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.159708   11845 system_pods.go:89] "snapshot-controller-56fcc65765-ln299" [b41b38e8-3e51-4c0c-87b1-6d3abc4889a4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.159717   11845 system_pods.go:89] "storage-provisioner" [be4bc2aa-70f7-48ee-b9f1-46102ba63337] Running
	I0816 12:22:52.159728   11845 system_pods.go:89] "tiller-deploy-b48cc5f79-v26s2" [505f660d-cfba-443f-a970-69b28a26f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 12:22:52.159740   11845 system_pods.go:126] duration metric: took 6.908249ms to wait for k8s-apps to be running ...
	I0816 12:22:52.159752   11845 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:22:52.159796   11845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:22:52.175468   11845 system_svc.go:56] duration metric: took 15.712057ms WaitForService to wait for kubelet
	I0816 12:22:52.175485   11845 kubeadm.go:582] duration metric: took 28.706463274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:22:52.175503   11845 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:22:52.177953   11845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:22:52.177970   11845 node_conditions.go:123] node cpu capacity is 2
	I0816 12:22:52.177981   11845 node_conditions.go:105] duration metric: took 2.474011ms to run NodePressure ...
	I0816 12:22:52.177992   11845 start.go:241] waiting for startup goroutines ...
	I0816 12:22:52.374694   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:52.401290   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:52.498772   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:52.499202   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:52.874789   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:52.901374   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:52.998738   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:52.999053   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:53.374628   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:53.400598   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:53.498995   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:53.499805   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:53.874669   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:53.900902   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:53.998447   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:53.999969   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:54.374153   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:54.400994   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:54.499807   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:54.500601   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:54.874918   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:54.901272   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:54.998827   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:54.999146   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:55.374864   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:55.402006   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:55.498877   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:55.501821   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:55.873907   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:55.901446   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:55.999355   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:55.999731   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:56.374667   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:56.400841   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:56.498344   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:56.498843   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:56.873686   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:56.901219   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:57.000225   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:57.000684   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:57.406452   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:57.406464   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:57.581809   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:57.581811   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:57.873553   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:57.901897   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:57.999428   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:58.000647   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:58.375854   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:58.400890   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:58.499130   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:58.499366   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:58.874318   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:58.901171   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:58.998876   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:58.999273   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:59.374396   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:59.401990   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:59.498496   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:59.498866   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:59.873864   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:59.901177   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:59.999686   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:59.999863   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:00.375290   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:00.400741   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:00.500522   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:00.500540   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:00.874191   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:00.901875   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:00.998704   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:00.998915   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:01.374078   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:01.401750   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:01.506346   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:01.506527   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:01.876368   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:01.902422   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:01.999596   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:02.000086   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:02.374151   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:02.401472   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:02.498948   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:02.499534   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:02.875274   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:02.901557   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:03.000269   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:03.000771   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:03.374993   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:03.401214   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:03.499004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:03.500065   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:03.874452   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:03.902257   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:04.000052   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:04.000236   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:04.373591   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:04.401858   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:04.499439   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:04.500206   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:04.875175   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:04.900804   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:04.999897   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:05.000047   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:05.374486   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:05.401688   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:05.499393   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:05.500080   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:05.877004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:05.903549   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:05.999977   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:06.000110   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:06.374268   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:06.401613   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:06.499179   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:06.499565   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:06.875406   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:06.901511   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:06.998797   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:07.003046   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:07.374553   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:07.401523   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:07.499715   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:07.500159   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:07.874729   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:07.974038   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:08.074967   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:08.075671   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:08.373797   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:08.402092   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:08.498444   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:08.498864   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:08.875182   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:08.901422   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:08.999944   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:09.000033   11845 kapi.go:107] duration metric: took 36.005384954s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 12:23:09.374660   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:09.402094   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:09.499479   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:09.874921   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:09.901199   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:09.999261   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:10.374195   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:10.401485   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:10.499632   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:10.880422   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:10.901095   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:11.000170   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:11.374609   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:11.400958   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:11.498800   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:11.875773   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:11.901576   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:12.025297   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:12.379651   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:12.404776   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:12.500633   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:12.875115   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:12.901587   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:13.002562   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:13.375903   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:13.401277   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:13.499028   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:13.875059   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:13.901218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:14.000465   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:14.374907   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:14.401815   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:14.499800   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:14.873796   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:14.901670   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:15.000128   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:15.374272   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:15.401525   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:15.499405   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:15.912958   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:15.913975   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:15.999293   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:16.376802   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:16.474341   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:16.499099   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:16.873901   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:16.901914   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:16.999332   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:17.375635   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:17.402734   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:17.500349   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:17.874067   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:17.900941   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:18.000821   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:18.376430   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:18.401004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:18.502166   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:18.874665   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:18.900609   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:18.999582   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:19.374808   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:19.401204   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:19.498622   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:19.874936   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:19.900675   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:19.999824   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:20.374071   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:20.401596   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:20.499720   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:20.877583   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:20.901311   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:20.998963   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:21.375218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:21.400901   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:21.499725   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:21.874820   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:21.902058   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:21.999435   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:22.374582   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:22.724718   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:22.726063   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:22.877439   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:22.977942   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:23.002228   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:23.375176   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:23.400817   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:23.500420   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:23.874085   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:23.900945   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:24.000016   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:24.373927   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:24.402008   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:24.499806   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:24.875066   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:24.901018   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:24.998297   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:25.374686   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:25.401323   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:25.498956   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:25.873581   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:25.902769   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:26.225837   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:26.373952   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:26.401335   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:26.499236   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:26.874753   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:26.901658   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:26.998867   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:27.495357   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:27.496728   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:27.500287   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:27.874566   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:27.903747   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:28.001145   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:28.375469   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:28.410583   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:28.507272   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:28.876611   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:28.900471   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:28.999004   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:29.375014   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:29.400980   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:29.499253   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:29.878123   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:29.902334   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:29.998830   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:30.375185   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:30.401259   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:30.498734   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:30.874379   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:30.901585   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:30.999376   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:31.374060   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:31.402494   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:31.499240   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:31.873501   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:31.901964   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:32.001713   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:32.375004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:32.475187   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:32.498490   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:32.874126   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:32.901660   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:32.999705   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:33.648062   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:33.648944   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:33.649048   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:33.879000   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:33.978503   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:33.998800   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:34.377154   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:34.401470   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:34.498933   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:34.874023   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:34.901202   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:34.998628   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:35.375722   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:35.475771   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:35.500319   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:35.874155   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:35.901147   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:35.998585   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:36.375292   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:36.402252   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:36.504605   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:36.874635   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:36.974912   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:37.075871   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:37.380259   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:37.476943   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:37.499312   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:37.884243   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:37.901536   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:37.999233   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:38.376164   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:38.401242   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:38.499014   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:38.874388   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:38.902109   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:39.007867   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:39.377261   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:39.401958   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:39.499842   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:39.874793   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:39.900837   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:40.001184   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:40.373598   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:40.401282   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:40.498665   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:40.874831   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:40.905107   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:40.999737   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:41.587811   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:41.588238   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:41.589218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:41.875453   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:41.900826   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:41.999247   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:42.377133   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:42.400830   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:42.499335   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:42.873948   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:42.901613   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:42.999803   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:43.373968   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:43.401208   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:43.499080   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:43.873787   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:43.900721   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:44.000558   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:44.631266   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:44.631977   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:44.631992   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:44.875238   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:44.901193   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:44.999818   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:45.375253   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:45.401216   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:45.499005   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:45.875266   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:45.901645   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:46.000597   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:46.374564   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:46.401726   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:46.509223   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:46.874783   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:46.904394   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:47.015118   11845 kapi.go:107] duration metric: took 1m14.02046352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 12:23:47.375233   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:47.401241   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:47.874630   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:47.900714   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:48.374248   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:48.400953   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:48.875333   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:48.901247   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:49.376363   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:49.401878   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:49.875581   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:49.901367   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:50.375708   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:50.401994   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:51.238531   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:51.240766   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:51.378359   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:51.401247   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:51.874269   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:51.973249   11845 kapi.go:107] duration metric: took 1m16.575493995s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 12:23:51.974718   11845 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-966941 cluster.
	I0816 12:23:51.976011   11845 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 12:23:51.977221   11845 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 12:23:52.375308   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:52.873775   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:53.374625   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:53.874171   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:54.374181   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:54.874272   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:55.376777   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:55.874150   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:56.374685   11845 kapi.go:107] duration metric: took 1m22.505212527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 12:23:56.376692   11845 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, default-storageclass, cloud-spanner, storage-provisioner-rancher, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0816 12:23:56.377867   11845 addons.go:510] duration metric: took 1m32.908811507s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns default-storageclass cloud-spanner storage-provisioner-rancher metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0816 12:23:56.377916   11845 start.go:246] waiting for cluster config update ...
	I0816 12:23:56.377942   11845 start.go:255] writing updated cluster config ...
	I0816 12:23:56.378258   11845 ssh_runner.go:195] Run: rm -f paused
	I0816 12:23:56.428917   11845 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 12:23:56.430937   11845 out.go:177] * Done! kubectl is now configured to use "addons-966941" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.513726878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811260513702165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98b67d9b-36a5-4854-9970-cda52609c0ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.514311770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e531bc92-c460-4124-8008-a97d7f8b3dd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.514386605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e531bc92-c460-4124-8008-a97d7f8b3dd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.514772045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255fe3970e510a539373129a8a9f0b444757388e901842ad1e0b141496f82305,PodSandboxId:94403e9f4dd1cdb594c04916d381ec34995ada164fcf42a8674833a91e780cfe,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012266113044,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-67cq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c28c1c0d-fabb-46a9-a1bd-253ba889a9f3,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62cfecbfbbeeea6af7ce63ee7fec9554e257448c052ab8f99795af75ac8b7fe,PodSandboxId:710b415c6d770876db945a50c2a6ecebed0d1501885bdce8c661af32330232ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012046880378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-drpm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d6a4
ba9-5cc7-4b50-b50d-38699a26cfa7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723810932830365759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17238109
32817029528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e531bc92-c460-4124-8008-a97d7f8b3dd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.552515490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d983d4b5-a943-4088-ae68-1d7e47f3e97e name=/runtime.v1.RuntimeService/Version
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.552586519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d983d4b5-a943-4088-ae68-1d7e47f3e97e name=/runtime.v1.RuntimeService/Version
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.553652045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28250e3d-368e-42b2-9586-d05de1fdfca0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.554888091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811260554859475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28250e3d-368e-42b2-9586-d05de1fdfca0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.555522219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=072fa2ac-c854-4c5c-8878-67d013211ade name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.555593319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=072fa2ac-c854-4c5c-8878-67d013211ade name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.555852025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255fe3970e510a539373129a8a9f0b444757388e901842ad1e0b141496f82305,PodSandboxId:94403e9f4dd1cdb594c04916d381ec34995ada164fcf42a8674833a91e780cfe,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012266113044,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-67cq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c28c1c0d-fabb-46a9-a1bd-253ba889a9f3,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62cfecbfbbeeea6af7ce63ee7fec9554e257448c052ab8f99795af75ac8b7fe,PodSandboxId:710b415c6d770876db945a50c2a6ecebed0d1501885bdce8c661af32330232ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012046880378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-drpm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d6a4
ba9-5cc7-4b50-b50d-38699a26cfa7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723810932830365759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17238109
32817029528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=072fa2ac-c854-4c5c-8878-67d013211ade name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.606027078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=001b033d-55d5-4529-a0c9-3cbe952e4e3d name=/runtime.v1.RuntimeService/Version
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.606098017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=001b033d-55d5-4529-a0c9-3cbe952e4e3d name=/runtime.v1.RuntimeService/Version
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.607646733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39ac0586-5e77-49d0-afab-9a7ebdcd0f4c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.608882315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811260608856419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39ac0586-5e77-49d0-afab-9a7ebdcd0f4c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.610277757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25346863-4caf-4801-819b-3e36af01e9e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.610335737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25346863-4caf-4801-819b-3e36af01e9e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.611257544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255fe3970e510a539373129a8a9f0b444757388e901842ad1e0b141496f82305,PodSandboxId:94403e9f4dd1cdb594c04916d381ec34995ada164fcf42a8674833a91e780cfe,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012266113044,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-67cq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c28c1c0d-fabb-46a9-a1bd-253ba889a9f3,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62cfecbfbbeeea6af7ce63ee7fec9554e257448c052ab8f99795af75ac8b7fe,PodSandboxId:710b415c6d770876db945a50c2a6ecebed0d1501885bdce8c661af32330232ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012046880378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-drpm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d6a4
ba9-5cc7-4b50-b50d-38699a26cfa7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723810932830365759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17238109
32817029528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25346863-4caf-4801-819b-3e36af01e9e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.645964615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ec01b2f-1c59-4fb0-aa95-7d7be091e070 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.646054829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ec01b2f-1c59-4fb0-aa95-7d7be091e070 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.647346951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9329abd5-2654-443c-85a8-f705b81ffd79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.648878223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811260648852315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9329abd5-2654-443c-85a8-f705b81ffd79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.649500573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b618966a-7768-4ad2-8927-31b5c8797a0a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.649553598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b618966a-7768-4ad2-8927-31b5c8797a0a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:27:40 addons-966941 crio[681]: time="2024-08-16 12:27:40.649832408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255fe3970e510a539373129a8a9f0b444757388e901842ad1e0b141496f82305,PodSandboxId:94403e9f4dd1cdb594c04916d381ec34995ada164fcf42a8674833a91e780cfe,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012266113044,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-67cq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c28c1c0d-fabb-46a9-a1bd-253ba889a9f3,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62cfecbfbbeeea6af7ce63ee7fec9554e257448c052ab8f99795af75ac8b7fe,PodSandboxId:710b415c6d770876db945a50c2a6ecebed0d1501885bdce8c661af32330232ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723811012046880378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-drpm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d6a4
ba9-5cc7-4b50-b50d-38699a26cfa7,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723810932830365759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17238109
32817029528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b618966a-7768-4ad2-8927-31b5c8797a0a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7b774160af91       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   a780f4468d501       hello-world-app-55bf9c44b4-xgd2h
	9a2aea5494614       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   a620cfcad2fa5       nginx
	844855539e3e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6a4d6b5159952       busybox
	255fe3970e510       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   94403e9f4dd1c       ingress-nginx-admission-patch-67cq2
	d62cfecbfbbee       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   710b415c6d770       ingress-nginx-admission-create-drpm8
	3d2540ee00152       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   f4ed0fa81aa0f       metrics-server-8988944d9-p6z8v
	c6219df4f6a9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   e438f1d3d5dcb       storage-provisioner
	9bae8bfdc21a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   ba5c40550c368       coredns-6f6b679f8f-jmsfb
	826db47751f80       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   e13b5c028dc51       kube-proxy-qnd5q
	087e5b6007d48       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   09a7b3e67f6b4       kube-scheduler-addons-966941
	c6a417011125a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   145971dbfd079       kube-apiserver-addons-966941
	ea1e709000b7f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   871d3079795f6       kube-controller-manager-addons-966941
	7bb33a64c7a6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   488317774d000       etcd-addons-966941
	
	
	==> coredns [9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a] <==
	[INFO] 10.244.0.7:36155 - 48735 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000296073s
	[INFO] 10.244.0.7:35810 - 31838 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085852s
	[INFO] 10.244.0.7:35810 - 46684 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093016s
	[INFO] 10.244.0.7:58626 - 41299 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006217s
	[INFO] 10.244.0.7:58626 - 51285 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102842s
	[INFO] 10.244.0.7:45240 - 11379 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010382s
	[INFO] 10.244.0.7:45240 - 39029 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178655s
	[INFO] 10.244.0.7:37080 - 4549 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121924s
	[INFO] 10.244.0.7:37080 - 40920 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172492s
	[INFO] 10.244.0.7:39588 - 57604 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109906s
	[INFO] 10.244.0.7:39588 - 40450 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031395s
	[INFO] 10.244.0.7:42235 - 63003 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086268s
	[INFO] 10.244.0.7:42235 - 52761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090073s
	[INFO] 10.244.0.7:47747 - 57743 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104217s
	[INFO] 10.244.0.7:47747 - 9612 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090525s
	[INFO] 10.244.0.22:57347 - 46561 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357197s
	[INFO] 10.244.0.22:40248 - 22516 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000114261s
	[INFO] 10.244.0.22:51106 - 49073 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012772s
	[INFO] 10.244.0.22:34649 - 57528 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000053051s
	[INFO] 10.244.0.22:49390 - 51786 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075473s
	[INFO] 10.244.0.22:45355 - 40275 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043733s
	[INFO] 10.244.0.22:58957 - 56775 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000778625s
	[INFO] 10.244.0.22:58237 - 3316 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00109432s
	[INFO] 10.244.0.26:40291 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000285169s
	[INFO] 10.244.0.26:47593 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149944s
	
	
	==> describe nodes <==
	Name:               addons-966941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-966941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=addons-966941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_22_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-966941
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:22:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-966941
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:27:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:25:52 +0000   Fri, 16 Aug 2024 12:22:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:25:52 +0000   Fri, 16 Aug 2024 12:22:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:25:52 +0000   Fri, 16 Aug 2024 12:22:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:25:52 +0000   Fri, 16 Aug 2024 12:22:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-966941
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb11edb67432498084ba0979e0b9a2a0
	  System UUID:                fb11edb6-7432-4980-84ba-0979e0b9a2a0
	  Boot ID:                    99dd81b0-f07e-42c1-807f-c6307b945b9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  default                     hello-world-app-55bf9c44b4-xgd2h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-6f6b679f8f-jmsfb                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m17s
	  kube-system                 etcd-addons-966941                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m22s
	  kube-system                 kube-apiserver-addons-966941             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-addons-966941    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-qnd5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-addons-966941             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 metrics-server-8988944d9-p6z8v           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m12s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node addons-966941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node addons-966941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node addons-966941 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m22s                  kubelet          Node addons-966941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s                  kubelet          Node addons-966941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s                  kubelet          Node addons-966941 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m21s                  kubelet          Node addons-966941 status is now: NodeReady
	  Normal  RegisteredNode           5m18s                  node-controller  Node addons-966941 event: Registered Node addons-966941 in Controller
	
	
	==> dmesg <==
	[Aug16 12:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.093804] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.826264] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.451320] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.115925] kauditd_printk_skb: 77 callbacks suppressed
	[  +7.615968] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.333674] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.009297] kauditd_printk_skb: 20 callbacks suppressed
	[Aug16 12:24] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.348175] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.905413] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.965412] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.953442] kauditd_printk_skb: 37 callbacks suppressed
	[  +8.399765] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.425735] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.345615] kauditd_printk_skb: 7 callbacks suppressed
	[Aug16 12:25] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.490148] kauditd_printk_skb: 45 callbacks suppressed
	[  +8.348775] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.286670] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.082959] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.848424] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.375819] kauditd_printk_skb: 33 callbacks suppressed
	[Aug16 12:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.246086] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1] <==
	{"level":"warn","ts":"2024-08-16T12:23:44.614515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.638969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:44.615703Z","caller":"traceutil/trace.go:171","msg":"trace[1742710230] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"129.725833ms","start":"2024-08-16T12:23:44.485872Z","end":"2024-08-16T12:23:44.615598Z","steps":["trace[1742710230] 'agreement among raft nodes before linearized reading'  (duration: 128.631135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.209538Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16204392754877957419,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-16T12:23:51.221748Z","caller":"traceutil/trace.go:171","msg":"trace[2039939734] linearizableReadLoop","detail":"{readStateIndex:1202; appliedIndex:1201; }","duration":"512.57673ms","start":"2024-08-16T12:23:50.709157Z","end":"2024-08-16T12:23:51.221733Z","steps":["trace[2039939734] 'read index received'  (duration: 512.339957ms)","trace[2039939734] 'applied index is now lower than readState.Index'  (duration: 236.33µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T12:23:51.221945Z","caller":"traceutil/trace.go:171","msg":"trace[529895148] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"565.556788ms","start":"2024-08-16T12:23:50.656341Z","end":"2024-08-16T12:23:51.221898Z","steps":["trace[529895148] 'process raft request'  (duration: 565.284352ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.358985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222096Z","caller":"traceutil/trace.go:171","msg":"trace[136183815] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"361.425408ms","start":"2024-08-16T12:23:50.860662Z","end":"2024-08-16T12:23:51.222087Z","steps":["trace[136183815] 'agreement among raft nodes before linearized reading'  (duration: 361.341256ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:23:50.860609Z","time spent":"361.506145ms","remote":"127.0.0.1:57548","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-16T12:23:51.222173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.432867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222209Z","caller":"traceutil/trace.go:171","msg":"trace[114815219] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1169; }","duration":"276.501007ms","start":"2024-08-16T12:23:50.945702Z","end":"2024-08-16T12:23:51.222203Z","steps":["trace[114815219] 'agreement among raft nodes before linearized reading'  (duration: 276.369503ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.649612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222306Z","caller":"traceutil/trace.go:171","msg":"trace[1057252280] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"333.684634ms","start":"2024-08-16T12:23:50.888615Z","end":"2024-08-16T12:23:51.222300Z","steps":["trace[1057252280] 'agreement among raft nodes before linearized reading'  (duration: 333.639927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:23:50.888582Z","time spent":"333.735805ms","remote":"127.0.0.1:57548","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-16T12:23:51.222049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:23:50.656320Z","time spent":"565.675315ms","remote":"127.0.0.1:57536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1163 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-16T12:23:51.222574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.41428ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222668Z","caller":"traceutil/trace.go:171","msg":"trace[1494340823] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1169; }","duration":"513.510286ms","start":"2024-08-16T12:23:50.709151Z","end":"2024-08-16T12:23:51.222661Z","steps":["trace[1494340823] 'agreement among raft nodes before linearized reading'  (duration: 513.402588ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:24:11.849192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.022431ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:24:11.849473Z","caller":"traceutil/trace.go:171","msg":"trace[1305355956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1293; }","duration":"140.289651ms","start":"2024-08-16T12:24:11.709117Z","end":"2024-08-16T12:24:11.849407Z","steps":["trace[1305355956] 'range keys from in-memory index tree'  (duration: 140.010663ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:25:08.757797Z","caller":"traceutil/trace.go:171","msg":"trace[1556507125] transaction","detail":"{read_only:false; response_revision:1630; number_of_response:1; }","duration":"342.552941ms","start":"2024-08-16T12:25:08.415211Z","end":"2024-08-16T12:25:08.757764Z","steps":["trace[1556507125] 'process raft request'  (duration: 342.199817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:25:08.758074Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:25:08.415194Z","time spent":"342.732347ms","remote":"127.0.0.1:36570","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1603 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-08-16T12:25:21.796884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.997808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:25:21.796949Z","caller":"traceutil/trace.go:171","msg":"trace[1246856543] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1741; }","duration":"209.073243ms","start":"2024-08-16T12:25:21.587866Z","end":"2024-08-16T12:25:21.796939Z","steps":["trace[1246856543] 'range keys from in-memory index tree'  (duration: 208.900576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:25:21.797111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.440704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-08-16T12:25:21.797128Z","caller":"traceutil/trace.go:171","msg":"trace[1889004501] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1741; }","duration":"137.459665ms","start":"2024-08-16T12:25:21.659663Z","end":"2024-08-16T12:25:21.797122Z","steps":["trace[1889004501] 'range keys from in-memory index tree'  (duration: 137.348197ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:25:23.899175Z","caller":"traceutil/trace.go:171","msg":"trace[755449834] transaction","detail":"{read_only:false; response_revision:1745; number_of_response:1; }","duration":"110.62388ms","start":"2024-08-16T12:25:23.788535Z","end":"2024-08-16T12:25:23.899159Z","steps":["trace[755449834] 'process raft request'  (duration: 109.976462ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:27:40 up 5 min,  0 users,  load average: 0.38, 1.12, 0.62
	Linux addons-966941 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 12:24:19.290043       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 12:24:19.292846       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0816 12:24:49.808317       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0816 12:24:54.814828       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.129:8443->10.244.0.28:49778: read: connection reset by peer
	I0816 12:25:03.844138       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 12:25:04.901306       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 12:25:09.527850       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 12:25:09.713962       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.85.112"}
	I0816 12:25:16.451689       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 12:25:18.460162       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.69.59"}
	I0816 12:25:51.054165       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.054222       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:25:51.082374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.082476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:25:51.190128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.190230       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:25:51.200774       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.200822       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 12:25:52.191107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 12:25:52.201843       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0816 12:25:52.333816       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0816 12:27:30.881595       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.242.210"}
	
	
	==> kube-controller-manager [ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3] <==
	W0816 12:26:29.861912       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:26:29.862067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:26:31.565537       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:26:31.565671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:26:33.263691       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:26:33.263809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:26:50.834384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:26:50.834581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:27:16.498532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:27:16.498860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:27:19.583242       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:27:19.583414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:27:20.782932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:27:20.783005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:27:30.695314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.789087ms"
	I0816 12:27:30.717965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="22.442878ms"
	I0816 12:27:30.718382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.03µs"
	I0816 12:27:30.722167       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="498.511µs"
	I0816 12:27:32.725211       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0816 12:27:32.732380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="4.967µs"
	I0816 12:27:32.737709       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0816 12:27:33.052110       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:27:33.052227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:27:34.149588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.715746ms"
	I0816 12:27:34.150309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.534µs"
	
	
	==> kube-proxy [826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:22:24.474371       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:22:24.484976       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	E0816 12:22:24.485053       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:22:24.623813       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:22:24.623845       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:22:24.623872       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:22:24.630017       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:22:24.630244       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:22:24.630255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:22:24.636227       1 config.go:197] "Starting service config controller"
	I0816 12:22:24.636253       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:22:24.636275       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:22:24.636279       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:22:24.636857       1 config.go:326] "Starting node config controller"
	I0816 12:22:24.636874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:22:24.736382       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:22:24.736480       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:22:24.737127       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853] <==
	W0816 12:22:15.506679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 12:22:15.506709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:15.507222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 12:22:15.507264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:15.507874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 12:22:15.507920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.394177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 12:22:16.394232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.406752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 12:22:16.406840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.491918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 12:22:16.491967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.614672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 12:22:16.614720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.635236       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 12:22:16.635290       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 12:22:16.645900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:22:16.646045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.682324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 12:22:16.682461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.698490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 12:22:16.699097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.698926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:22:16.699381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 12:22:18.695410       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 12:27:30 addons-966941 kubelet[1224]: I0816 12:27:30.748316    1224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmz2j\" (UniqueName: \"kubernetes.io/projected/f258e324-71ea-4930-9f6b-bbfed2eb5b61-kube-api-access-cmz2j\") pod \"hello-world-app-55bf9c44b4-xgd2h\" (UID: \"f258e324-71ea-4930-9f6b-bbfed2eb5b61\") " pod="default/hello-world-app-55bf9c44b4-xgd2h"
	Aug 16 12:27:31 addons-966941 kubelet[1224]: I0816 12:27:31.857803    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d988d\" (UniqueName: \"kubernetes.io/projected/ac8db978-31ce-467e-8c0c-585910bf0042-kube-api-access-d988d\") pod \"ac8db978-31ce-467e-8c0c-585910bf0042\" (UID: \"ac8db978-31ce-467e-8c0c-585910bf0042\") "
	Aug 16 12:27:31 addons-966941 kubelet[1224]: I0816 12:27:31.861287    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8db978-31ce-467e-8c0c-585910bf0042-kube-api-access-d988d" (OuterVolumeSpecName: "kube-api-access-d988d") pod "ac8db978-31ce-467e-8c0c-585910bf0042" (UID: "ac8db978-31ce-467e-8c0c-585910bf0042"). InnerVolumeSpecName "kube-api-access-d988d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:27:31 addons-966941 kubelet[1224]: I0816 12:27:31.958520    1224 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d988d\" (UniqueName: \"kubernetes.io/projected/ac8db978-31ce-467e-8c0c-585910bf0042-kube-api-access-d988d\") on node \"addons-966941\" DevicePath \"\""
	Aug 16 12:27:32 addons-966941 kubelet[1224]: I0816 12:27:32.092495    1224 scope.go:117] "RemoveContainer" containerID="e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985"
	Aug 16 12:27:32 addons-966941 kubelet[1224]: I0816 12:27:32.121220    1224 scope.go:117] "RemoveContainer" containerID="e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985"
	Aug 16 12:27:32 addons-966941 kubelet[1224]: E0816 12:27:32.121983    1224 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985\": container with ID starting with e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985 not found: ID does not exist" containerID="e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985"
	Aug 16 12:27:32 addons-966941 kubelet[1224]: I0816 12:27:32.122036    1224 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985"} err="failed to get container status \"e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985\": rpc error: code = NotFound desc = could not find container \"e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985\": container with ID starting with e8989d45399930d8d21a60b8715f8416070d49e45338b876414ad11e21a88985 not found: ID does not exist"
	Aug 16 12:27:33 addons-966941 kubelet[1224]: I0816 12:27:33.961732    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6a4ba9-5cc7-4b50-b50d-38699a26cfa7" path="/var/lib/kubelet/pods/6d6a4ba9-5cc7-4b50-b50d-38699a26cfa7/volumes"
	Aug 16 12:27:33 addons-966941 kubelet[1224]: I0816 12:27:33.962115    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8db978-31ce-467e-8c0c-585910bf0042" path="/var/lib/kubelet/pods/ac8db978-31ce-467e-8c0c-585910bf0042/volumes"
	Aug 16 12:27:33 addons-966941 kubelet[1224]: I0816 12:27:33.962555    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28c1c0d-fabb-46a9-a1bd-253ba889a9f3" path="/var/lib/kubelet/pods/c28c1c0d-fabb-46a9-a1bd-253ba889a9f3/volumes"
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.089549    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqwsw\" (UniqueName: \"kubernetes.io/projected/4b712210-6c63-4cd5-ade4-341975b76182-kube-api-access-xqwsw\") pod \"4b712210-6c63-4cd5-ade4-341975b76182\" (UID: \"4b712210-6c63-4cd5-ade4-341975b76182\") "
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.089616    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b712210-6c63-4cd5-ade4-341975b76182-webhook-cert\") pod \"4b712210-6c63-4cd5-ade4-341975b76182\" (UID: \"4b712210-6c63-4cd5-ade4-341975b76182\") "
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.091948    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b712210-6c63-4cd5-ade4-341975b76182-kube-api-access-xqwsw" (OuterVolumeSpecName: "kube-api-access-xqwsw") pod "4b712210-6c63-4cd5-ade4-341975b76182" (UID: "4b712210-6c63-4cd5-ade4-341975b76182"). InnerVolumeSpecName "kube-api-access-xqwsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.095069    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b712210-6c63-4cd5-ade4-341975b76182-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4b712210-6c63-4cd5-ade4-341975b76182" (UID: "4b712210-6c63-4cd5-ade4-341975b76182"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.135874    1224 scope.go:117] "RemoveContainer" containerID="f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9"
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.156092    1224 scope.go:117] "RemoveContainer" containerID="f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9"
	Aug 16 12:27:36 addons-966941 kubelet[1224]: E0816 12:27:36.156804    1224 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9\": container with ID starting with f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9 not found: ID does not exist" containerID="f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9"
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.156831    1224 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9"} err="failed to get container status \"f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9\": rpc error: code = NotFound desc = could not find container \"f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9\": container with ID starting with f5cd664c34cdfea5d41b84f9c68518843c7789eda6083b58146715c0fed5fef9 not found: ID does not exist"
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.190326    1224 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b712210-6c63-4cd5-ade4-341975b76182-webhook-cert\") on node \"addons-966941\" DevicePath \"\""
	Aug 16 12:27:36 addons-966941 kubelet[1224]: I0816 12:27:36.190380    1224 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xqwsw\" (UniqueName: \"kubernetes.io/projected/4b712210-6c63-4cd5-ade4-341975b76182-kube-api-access-xqwsw\") on node \"addons-966941\" DevicePath \"\""
	Aug 16 12:27:37 addons-966941 kubelet[1224]: I0816 12:27:37.962548    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b712210-6c63-4cd5-ade4-341975b76182" path="/var/lib/kubelet/pods/4b712210-6c63-4cd5-ade4-341975b76182/volumes"
	Aug 16 12:27:38 addons-966941 kubelet[1224]: E0816 12:27:38.149165    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811258148693464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:27:38 addons-966941 kubelet[1224]: E0816 12:27:38.149191    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811258148693464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:27:40 addons-966941 kubelet[1224]: I0816 12:27:40.958918    1224 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a] <==
	I0816 12:22:30.562900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 12:22:30.608636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 12:22:30.608684       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 12:22:30.748783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 12:22:30.748928       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-966941_3269c7d6-db75-4b54-a4b8-88c25905904a!
	I0816 12:22:30.750148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a001f778-fcb5-42eb-a580-0d5d7ade1b5b", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-966941_3269c7d6-db75-4b54-a4b8-88c25905904a became leader
	I0816 12:22:31.010791       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-966941_3269c7d6-db75-4b54-a4b8-88c25905904a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-966941 -n addons-966941
helpers_test.go:261: (dbg) Run:  kubectl --context addons-966941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.42s)

TestAddons/parallel/MetricsServer (323.96s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.799549ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-p6z8v" [32196dc2-ada2-4e60-b64c-573967f34e54] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004304193s
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (63.675885ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 2m22.443663997s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (63.389109ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 2m25.666115711s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (66.394519ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 2m32.002046769s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (64.017366ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 2m41.850661882s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (64.761493ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 2m47.802929072s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (64.712891ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 2m59.363511873s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (61.058344ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 3m23.470628269s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (60.562398ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 3m54.651131065s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (60.533302ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 4m36.692273942s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (60.179791ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 5m12.667431162s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (62.187031ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 6m23.452540949s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-966941 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-966941 top pods -n kube-system: exit status 1 (63.681052ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jmsfb, age: 7m37.662682793s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-966941 -n addons-966941
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 logs -n 25: (1.287659279s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-723080                                                                     | download-only-723080 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-862449 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | binary-mirror-862449                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43873                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-862449                                                                     | binary-mirror-862449 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-966941 --wait=true                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:23 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-966941 ssh cat                                                                       | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | /opt/local-path-provisioner/pvc-e2d2f869-e0e4-4450-9779-9bdaae043e0c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-966941 ip                                                                            | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | -p addons-966941                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:24 UTC | 16 Aug 24 12:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | addons-966941                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | -p addons-966941                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-966941 ssh curl -s                                                                   | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-966941 addons                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-966941 ip                                                                            | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:27 UTC | 16 Aug 24 12:27 UTC |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:27 UTC | 16 Aug 24 12:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-966941 addons disable                                                                | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:27 UTC | 16 Aug 24 12:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-966941 addons                                                                        | addons-966941        | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC | 16 Aug 24 12:30 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:21:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:21:36.812588   11845 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:21:36.812700   11845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:21:36.812711   11845 out.go:358] Setting ErrFile to fd 2...
	I0816 12:21:36.812717   11845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:21:36.812897   11845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:21:36.813482   11845 out.go:352] Setting JSON to false
	I0816 12:21:36.814262   11845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":242,"bootTime":1723810655,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:21:36.814322   11845 start.go:139] virtualization: kvm guest
	I0816 12:21:36.816396   11845 out.go:177] * [addons-966941] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:21:36.817807   11845 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:21:36.817833   11845 notify.go:220] Checking for updates...
	I0816 12:21:36.820495   11845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:21:36.821803   11845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:21:36.822969   11845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:36.824101   11845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:21:36.825313   11845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:21:36.826555   11845 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:21:36.857351   11845 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 12:21:36.858597   11845 start.go:297] selected driver: kvm2
	I0816 12:21:36.858616   11845 start.go:901] validating driver "kvm2" against <nil>
	I0816 12:21:36.858628   11845 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:21:36.859277   11845 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:21:36.859382   11845 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:21:36.873504   11845 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:21:36.873554   11845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:21:36.873756   11845 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:21:36.873787   11845 cni.go:84] Creating CNI manager for ""
	I0816 12:21:36.873800   11845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:21:36.873807   11845 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 12:21:36.873861   11845 start.go:340] cluster config:
	{Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:21:36.873969   11845 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:21:36.875713   11845 out.go:177] * Starting "addons-966941" primary control-plane node in "addons-966941" cluster
	I0816 12:21:36.876944   11845 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:21:36.876968   11845 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:21:36.876974   11845 cache.go:56] Caching tarball of preloaded images
	I0816 12:21:36.877045   11845 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:21:36.877055   11845 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:21:36.877341   11845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/config.json ...
	I0816 12:21:36.877360   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/config.json: {Name:mka6e26b83c1ff181c94a2ba1ba48c6b50bbc421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:21:36.877475   11845 start.go:360] acquireMachinesLock for addons-966941: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:21:36.877516   11845 start.go:364] duration metric: took 28.838µs to acquireMachinesLock for "addons-966941"
	I0816 12:21:36.877532   11845 start.go:93] Provisioning new machine with config: &{Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:21:36.877586   11845 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 12:21:36.878992   11845 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0816 12:21:36.879114   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:21:36.879147   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:21:36.893177   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I0816 12:21:36.893662   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:21:36.894203   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:21:36.894222   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:21:36.894593   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:21:36.894772   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:21:36.894938   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:21:36.895076   11845 start.go:159] libmachine.API.Create for "addons-966941" (driver="kvm2")
	I0816 12:21:36.895110   11845 client.go:168] LocalClient.Create starting
	I0816 12:21:36.895161   11845 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:21:37.117247   11845 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:21:37.321675   11845 main.go:141] libmachine: Running pre-create checks...
	I0816 12:21:37.321698   11845 main.go:141] libmachine: (addons-966941) Calling .PreCreateCheck
	I0816 12:21:37.322183   11845 main.go:141] libmachine: (addons-966941) Calling .GetConfigRaw
	I0816 12:21:37.322570   11845 main.go:141] libmachine: Creating machine...
	I0816 12:21:37.322582   11845 main.go:141] libmachine: (addons-966941) Calling .Create
	I0816 12:21:37.322731   11845 main.go:141] libmachine: (addons-966941) Creating KVM machine...
	I0816 12:21:37.323976   11845 main.go:141] libmachine: (addons-966941) DBG | found existing default KVM network
	I0816 12:21:37.324706   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.324555   11867 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0816 12:21:37.324727   11845 main.go:141] libmachine: (addons-966941) DBG | created network xml: 
	I0816 12:21:37.324742   11845 main.go:141] libmachine: (addons-966941) DBG | <network>
	I0816 12:21:37.324757   11845 main.go:141] libmachine: (addons-966941) DBG |   <name>mk-addons-966941</name>
	I0816 12:21:37.324767   11845 main.go:141] libmachine: (addons-966941) DBG |   <dns enable='no'/>
	I0816 12:21:37.324777   11845 main.go:141] libmachine: (addons-966941) DBG |   
	I0816 12:21:37.324789   11845 main.go:141] libmachine: (addons-966941) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 12:21:37.324797   11845 main.go:141] libmachine: (addons-966941) DBG |     <dhcp>
	I0816 12:21:37.324804   11845 main.go:141] libmachine: (addons-966941) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 12:21:37.324811   11845 main.go:141] libmachine: (addons-966941) DBG |     </dhcp>
	I0816 12:21:37.324818   11845 main.go:141] libmachine: (addons-966941) DBG |   </ip>
	I0816 12:21:37.324827   11845 main.go:141] libmachine: (addons-966941) DBG |   
	I0816 12:21:37.324843   11845 main.go:141] libmachine: (addons-966941) DBG | </network>
	I0816 12:21:37.324853   11845 main.go:141] libmachine: (addons-966941) DBG | 
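The driver logs the libvirt network XML it is about to define before creating the private network. If that network later needs to be inspected from the host, the standard `virsh net-dumpxml` subcommand can read it back; the small wrapper below is a hypothetical sketch, with the network name taken from the log:

    // Hypothetical helper: read back the XML of the private network the
    // kvm2 driver created, via the standard `virsh net-dumpxml` subcommand.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func dumpNetworkXML(name string) (string, error) {
        out, err := exec.Command("virsh", "net-dumpxml", name).CombinedOutput()
        return string(out), err
    }

    func main() {
        xml, err := dumpNetworkXML("mk-addons-966941")
        if err != nil {
            fmt.Println("virsh failed:", err)
            return
        }
        fmt.Print(xml)
    }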
	I0816 12:21:37.329745   11845 main.go:141] libmachine: (addons-966941) DBG | trying to create private KVM network mk-addons-966941 192.168.39.0/24...
	I0816 12:21:37.392880   11845 main.go:141] libmachine: (addons-966941) DBG | private KVM network mk-addons-966941 192.168.39.0/24 created
	I0816 12:21:37.392918   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.392843   11867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:37.392955   11845 main.go:141] libmachine: (addons-966941) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941 ...
	I0816 12:21:37.392980   11845 main.go:141] libmachine: (addons-966941) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:21:37.392998   11845 main.go:141] libmachine: (addons-966941) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:21:37.651788   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.651616   11867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa...
	I0816 12:21:37.851487   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.851337   11867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/addons-966941.rawdisk...
	I0816 12:21:37.851522   11845 main.go:141] libmachine: (addons-966941) DBG | Writing magic tar header
	I0816 12:21:37.851537   11845 main.go:141] libmachine: (addons-966941) DBG | Writing SSH key tar header
	I0816 12:21:37.851578   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:37.851459   11867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941 ...
	I0816 12:21:37.851598   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941 (perms=drwx------)
	I0816 12:21:37.851618   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:21:37.851632   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941
	I0816 12:21:37.851642   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:21:37.851655   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:21:37.851665   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:21:37.851677   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:21:37.851694   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:37.851708   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:21:37.851724   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:21:37.851736   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:21:37.851754   11845 main.go:141] libmachine: (addons-966941) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:21:37.851765   11845 main.go:141] libmachine: (addons-966941) Creating domain...
	I0816 12:21:37.851777   11845 main.go:141] libmachine: (addons-966941) DBG | Checking permissions on dir: /home
	I0816 12:21:37.851793   11845 main.go:141] libmachine: (addons-966941) DBG | Skipping /home - not owner
	I0816 12:21:37.852714   11845 main.go:141] libmachine: (addons-966941) define libvirt domain using xml: 
	I0816 12:21:37.852747   11845 main.go:141] libmachine: (addons-966941) <domain type='kvm'>
	I0816 12:21:37.852759   11845 main.go:141] libmachine: (addons-966941)   <name>addons-966941</name>
	I0816 12:21:37.852767   11845 main.go:141] libmachine: (addons-966941)   <memory unit='MiB'>4000</memory>
	I0816 12:21:37.852776   11845 main.go:141] libmachine: (addons-966941)   <vcpu>2</vcpu>
	I0816 12:21:37.852785   11845 main.go:141] libmachine: (addons-966941)   <features>
	I0816 12:21:37.852794   11845 main.go:141] libmachine: (addons-966941)     <acpi/>
	I0816 12:21:37.852804   11845 main.go:141] libmachine: (addons-966941)     <apic/>
	I0816 12:21:37.852814   11845 main.go:141] libmachine: (addons-966941)     <pae/>
	I0816 12:21:37.852827   11845 main.go:141] libmachine: (addons-966941)     
	I0816 12:21:37.852839   11845 main.go:141] libmachine: (addons-966941)   </features>
	I0816 12:21:37.852855   11845 main.go:141] libmachine: (addons-966941)   <cpu mode='host-passthrough'>
	I0816 12:21:37.852865   11845 main.go:141] libmachine: (addons-966941)   
	I0816 12:21:37.852874   11845 main.go:141] libmachine: (addons-966941)   </cpu>
	I0816 12:21:37.852885   11845 main.go:141] libmachine: (addons-966941)   <os>
	I0816 12:21:37.852893   11845 main.go:141] libmachine: (addons-966941)     <type>hvm</type>
	I0816 12:21:37.852903   11845 main.go:141] libmachine: (addons-966941)     <boot dev='cdrom'/>
	I0816 12:21:37.852936   11845 main.go:141] libmachine: (addons-966941)     <boot dev='hd'/>
	I0816 12:21:37.852944   11845 main.go:141] libmachine: (addons-966941)     <bootmenu enable='no'/>
	I0816 12:21:37.852955   11845 main.go:141] libmachine: (addons-966941)   </os>
	I0816 12:21:37.852965   11845 main.go:141] libmachine: (addons-966941)   <devices>
	I0816 12:21:37.852977   11845 main.go:141] libmachine: (addons-966941)     <disk type='file' device='cdrom'>
	I0816 12:21:37.852990   11845 main.go:141] libmachine: (addons-966941)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/boot2docker.iso'/>
	I0816 12:21:37.853017   11845 main.go:141] libmachine: (addons-966941)       <target dev='hdc' bus='scsi'/>
	I0816 12:21:37.853039   11845 main.go:141] libmachine: (addons-966941)       <readonly/>
	I0816 12:21:37.853047   11845 main.go:141] libmachine: (addons-966941)     </disk>
	I0816 12:21:37.853052   11845 main.go:141] libmachine: (addons-966941)     <disk type='file' device='disk'>
	I0816 12:21:37.853061   11845 main.go:141] libmachine: (addons-966941)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:21:37.853073   11845 main.go:141] libmachine: (addons-966941)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/addons-966941.rawdisk'/>
	I0816 12:21:37.853082   11845 main.go:141] libmachine: (addons-966941)       <target dev='hda' bus='virtio'/>
	I0816 12:21:37.853087   11845 main.go:141] libmachine: (addons-966941)     </disk>
	I0816 12:21:37.853095   11845 main.go:141] libmachine: (addons-966941)     <interface type='network'>
	I0816 12:21:37.853100   11845 main.go:141] libmachine: (addons-966941)       <source network='mk-addons-966941'/>
	I0816 12:21:37.853107   11845 main.go:141] libmachine: (addons-966941)       <model type='virtio'/>
	I0816 12:21:37.853114   11845 main.go:141] libmachine: (addons-966941)     </interface>
	I0816 12:21:37.853125   11845 main.go:141] libmachine: (addons-966941)     <interface type='network'>
	I0816 12:21:37.853133   11845 main.go:141] libmachine: (addons-966941)       <source network='default'/>
	I0816 12:21:37.853138   11845 main.go:141] libmachine: (addons-966941)       <model type='virtio'/>
	I0816 12:21:37.853144   11845 main.go:141] libmachine: (addons-966941)     </interface>
	I0816 12:21:37.853148   11845 main.go:141] libmachine: (addons-966941)     <serial type='pty'>
	I0816 12:21:37.853156   11845 main.go:141] libmachine: (addons-966941)       <target port='0'/>
	I0816 12:21:37.853161   11845 main.go:141] libmachine: (addons-966941)     </serial>
	I0816 12:21:37.853168   11845 main.go:141] libmachine: (addons-966941)     <console type='pty'>
	I0816 12:21:37.853180   11845 main.go:141] libmachine: (addons-966941)       <target type='serial' port='0'/>
	I0816 12:21:37.853187   11845 main.go:141] libmachine: (addons-966941)     </console>
	I0816 12:21:37.853192   11845 main.go:141] libmachine: (addons-966941)     <rng model='virtio'>
	I0816 12:21:37.853206   11845 main.go:141] libmachine: (addons-966941)       <backend model='random'>/dev/random</backend>
	I0816 12:21:37.853218   11845 main.go:141] libmachine: (addons-966941)     </rng>
	I0816 12:21:37.853226   11845 main.go:141] libmachine: (addons-966941)     
	I0816 12:21:37.853235   11845 main.go:141] libmachine: (addons-966941)     
	I0816 12:21:37.853244   11845 main.go:141] libmachine: (addons-966941)   </devices>
	I0816 12:21:37.853253   11845 main.go:141] libmachine: (addons-966941) </domain>
	I0816 12:21:37.853262   11845 main.go:141] libmachine: (addons-966941) 
	I0816 12:21:37.859924   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:83:6d:e5 in network default
	I0816 12:21:37.860536   11845 main.go:141] libmachine: (addons-966941) Ensuring networks are active...
	I0816 12:21:37.860557   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:37.861280   11845 main.go:141] libmachine: (addons-966941) Ensuring network default is active
	I0816 12:21:37.861544   11845 main.go:141] libmachine: (addons-966941) Ensuring network mk-addons-966941 is active
	I0816 12:21:37.862039   11845 main.go:141] libmachine: (addons-966941) Getting domain xml...
	I0816 12:21:37.862702   11845 main.go:141] libmachine: (addons-966941) Creating domain...
	I0816 12:21:39.226117   11845 main.go:141] libmachine: (addons-966941) Waiting to get IP...
	I0816 12:21:39.226798   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:39.227181   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:39.227209   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:39.227123   11867 retry.go:31] will retry after 212.176895ms: waiting for machine to come up
	I0816 12:21:39.440410   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:39.440876   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:39.440898   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:39.440829   11867 retry.go:31] will retry after 318.628327ms: waiting for machine to come up
	I0816 12:21:39.761242   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:39.761693   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:39.761725   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:39.761638   11867 retry.go:31] will retry after 326.446143ms: waiting for machine to come up
	I0816 12:21:40.090044   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:40.090529   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:40.090562   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:40.090475   11867 retry.go:31] will retry after 510.023741ms: waiting for machine to come up
	I0816 12:21:40.601826   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:40.602271   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:40.602307   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:40.602216   11867 retry.go:31] will retry after 470.811839ms: waiting for machine to come up
	I0816 12:21:41.074771   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:41.075149   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:41.075179   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:41.075102   11867 retry.go:31] will retry after 951.863255ms: waiting for machine to come up
	I0816 12:21:42.028898   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:42.029352   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:42.029387   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:42.029306   11867 retry.go:31] will retry after 738.943948ms: waiting for machine to come up
	I0816 12:21:42.770285   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:42.770676   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:42.770700   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:42.770639   11867 retry.go:31] will retry after 1.372347115s: waiting for machine to come up
	I0816 12:21:44.145005   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:44.145379   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:44.145401   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:44.145330   11867 retry.go:31] will retry after 1.259425595s: waiting for machine to come up
	I0816 12:21:45.406828   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:45.407302   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:45.407353   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:45.407275   11867 retry.go:31] will retry after 1.739503164s: waiting for machine to come up
	I0816 12:21:47.147804   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:47.148256   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:47.148293   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:47.148225   11867 retry.go:31] will retry after 2.662184372s: waiting for machine to come up
	I0816 12:21:49.814022   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:49.814419   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:49.814444   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:49.814375   11867 retry.go:31] will retry after 2.650973984s: waiting for machine to come up
	I0816 12:21:52.466479   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:52.466900   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:52.466929   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:52.466855   11867 retry.go:31] will retry after 3.024826315s: waiting for machine to come up
	I0816 12:21:55.494960   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:21:55.495405   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find current IP address of domain addons-966941 in network mk-addons-966941
	I0816 12:21:55.495425   11845 main.go:141] libmachine: (addons-966941) DBG | I0816 12:21:55.495347   11867 retry.go:31] will retry after 5.305855896s: waiting for machine to come up
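The "will retry after ..." lines above come from a retry helper that waits with growing delays for the guest to pick up a DHCP lease. A self-contained Go sketch of that backoff pattern (hypothetical names, not minikube's actual retry.go) looks roughly like this:

    // Illustrative backoff loop, mirroring the "will retry after ..." pattern
    // in the log; names and the stand-in probe are hypothetical.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, probe func() error) error {
        delay := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if err := probe(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            time.Sleep(delay + jitter)
            delay *= 2 // wait longer after each failed attempt
        }
        return errors.New("condition not met after retries")
    }

    func main() {
        start := time.Now()
        err := retryWithBackoff(10, func() error {
            // Stand-in probe; the real driver asks libvirt for the domain's DHCP lease.
            if time.Since(start) > 2*time.Second {
                return nil
            }
            return errors.New("no IP yet")
        })
        fmt.Println("result:", err)
    }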
	I0816 12:22:00.805546   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.805964   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has current primary IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.805981   11845 main.go:141] libmachine: (addons-966941) Found IP for machine: 192.168.39.129
	I0816 12:22:00.805993   11845 main.go:141] libmachine: (addons-966941) Reserving static IP address...
	I0816 12:22:00.806435   11845 main.go:141] libmachine: (addons-966941) DBG | unable to find host DHCP lease matching {name: "addons-966941", mac: "52:54:00:72:dd:30", ip: "192.168.39.129"} in network mk-addons-966941
	I0816 12:22:00.875078   11845 main.go:141] libmachine: (addons-966941) Reserved static IP address: 192.168.39.129
	I0816 12:22:00.875104   11845 main.go:141] libmachine: (addons-966941) Waiting for SSH to be available...
	I0816 12:22:00.875115   11845 main.go:141] libmachine: (addons-966941) DBG | Getting to WaitForSSH function...
	I0816 12:22:00.877555   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.877959   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:00.877990   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:00.878211   11845 main.go:141] libmachine: (addons-966941) DBG | Using SSH client type: external
	I0816 12:22:00.878238   11845 main.go:141] libmachine: (addons-966941) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa (-rw-------)
	I0816 12:22:00.878270   11845 main.go:141] libmachine: (addons-966941) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:22:00.878284   11845 main.go:141] libmachine: (addons-966941) DBG | About to run SSH command:
	I0816 12:22:00.878297   11845 main.go:141] libmachine: (addons-966941) DBG | exit 0
	I0816 12:22:01.012784   11845 main.go:141] libmachine: (addons-966941) DBG | SSH cmd err, output: <nil>: 
	I0816 12:22:01.013045   11845 main.go:141] libmachine: (addons-966941) KVM machine creation complete!
	I0816 12:22:01.013338   11845 main.go:141] libmachine: (addons-966941) Calling .GetConfigRaw
	I0816 12:22:01.013852   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:01.014047   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:01.014204   11845 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:22:01.014221   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:01.015454   11845 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:22:01.015470   11845 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:22:01.015477   11845 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:22:01.015483   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.017805   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.018107   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.018132   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.018246   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.018415   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.018561   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.018687   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.018833   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.018994   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.019004   11845 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:22:01.124103   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
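The "exit 0" command above is a reachability probe: once the guest accepts an SSH session and the trivial command exits with status 0, provisioning proceeds. A hedged Go sketch of such a probe follows; the user, IP, and SSH options are taken from the log, while the helper itself is hypothetical:

    // Hypothetical SSH reachability probe: run `exit 0` on the guest and
    // treat a zero exit status as "SSH is available".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshReachable(user, host, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, host),
            "exit", "0")
        return cmd.Run() == nil
    }

    func main() {
        // Key path is a placeholder; the log uses the per-machine id_rsa.
        fmt.Println("ssh reachable:", sshReachable("docker", "192.168.39.129", "id_rsa"))
    }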
	I0816 12:22:01.124123   11845 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:22:01.124130   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.126664   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.126968   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.126996   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.127101   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.127297   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.127466   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.127627   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.127772   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.127964   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.127979   11845 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:22:01.237557   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:22:01.237613   11845 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:22:01.237623   11845 main.go:141] libmachine: Provisioning with buildroot...
	I0816 12:22:01.237634   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:22:01.237834   11845 buildroot.go:166] provisioning hostname "addons-966941"
	I0816 12:22:01.237855   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:22:01.238044   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.240394   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.240712   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.240740   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.240880   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.241039   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.241198   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.241310   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.241479   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.241630   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.241643   11845 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-966941 && echo "addons-966941" | sudo tee /etc/hostname
	I0816 12:22:01.363855   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-966941
	
	I0816 12:22:01.363878   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.366629   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.367013   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.367046   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.367263   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.367451   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.367607   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.367703   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.367869   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.368066   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.368085   11845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-966941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-966941/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-966941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:22:01.487413   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:22:01.487443   11845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:22:01.487478   11845 buildroot.go:174] setting up certificates
	I0816 12:22:01.487488   11845 provision.go:84] configureAuth start
	I0816 12:22:01.487502   11845 main.go:141] libmachine: (addons-966941) Calling .GetMachineName
	I0816 12:22:01.487758   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:01.490514   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.490908   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.490974   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.491063   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.493334   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.493680   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.493706   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.493838   11845 provision.go:143] copyHostCerts
	I0816 12:22:01.493896   11845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:22:01.494044   11845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:22:01.494129   11845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:22:01.494202   11845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.addons-966941 san=[127.0.0.1 192.168.39.129 addons-966941 localhost minikube]
	I0816 12:22:01.559551   11845 provision.go:177] copyRemoteCerts
	I0816 12:22:01.559598   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:22:01.559617   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.562323   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.562653   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.562676   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.562833   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.563019   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.563143   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.563306   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:01.646652   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:22:01.670552   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:22:01.693669   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 12:22:01.716481   11845 provision.go:87] duration metric: took 228.980328ms to configureAuth
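For reference, the configureAuth step above (provision.go:117) generates a server certificate whose SANs match the log line: 127.0.0.1, 192.168.39.129, addons-966941, localhost, minikube. Below is a minimal, self-contained Go sketch of issuing such a SAN-bearing certificate from a throwaway CA; the key size, validity period, and subject are illustrative assumptions, not minikube's actual implementation.

// Illustrative sketch only: create a throwaway CA and a server certificate
// carrying the same kinds of SANs reported by the provision log above.
// Key size, validity and subject are assumptions, not minikube's code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-966941"}},
		DNSNames:     []string{"addons-966941", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.129")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode, as the file copied to /etc/docker/server.pem would be.
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}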
	I0816 12:22:01.716513   11845 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:22:01.716691   11845 config.go:182] Loaded profile config "addons-966941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:22:01.716772   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.719693   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.720061   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.720087   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.720257   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.720424   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.720577   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.720764   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.720918   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:01.721143   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:01.721159   11845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:22:01.985305   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:22:01.985329   11845 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:22:01.985336   11845 main.go:141] libmachine: (addons-966941) Calling .GetURL
	I0816 12:22:01.986661   11845 main.go:141] libmachine: (addons-966941) DBG | Using libvirt version 6000000
	I0816 12:22:01.988765   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.989112   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.989137   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.989280   11845 main.go:141] libmachine: Docker is up and running!
	I0816 12:22:01.989294   11845 main.go:141] libmachine: Reticulating splines...
	I0816 12:22:01.989301   11845 client.go:171] duration metric: took 25.094181306s to LocalClient.Create
	I0816 12:22:01.989329   11845 start.go:167] duration metric: took 25.094258123s to libmachine.API.Create "addons-966941"
	I0816 12:22:01.989341   11845 start.go:293] postStartSetup for "addons-966941" (driver="kvm2")
	I0816 12:22:01.989353   11845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:22:01.989376   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:01.989570   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:22:01.989598   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:01.991457   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.991717   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:01.991739   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:01.991830   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:01.992009   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:01.992155   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:01.992305   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:02.078668   11845 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:22:02.082932   11845 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:22:02.082954   11845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:22:02.083058   11845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:22:02.083083   11845 start.go:296] duration metric: took 93.736523ms for postStartSetup
	I0816 12:22:02.083113   11845 main.go:141] libmachine: (addons-966941) Calling .GetConfigRaw
	I0816 12:22:02.083715   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:02.086531   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.086836   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.086868   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.087038   11845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/config.json ...
	I0816 12:22:02.087204   11845 start.go:128] duration metric: took 25.209609244s to createHost
	I0816 12:22:02.087223   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:02.089238   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.089524   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.089545   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.089668   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:02.089844   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.089994   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.090126   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:02.090281   11845 main.go:141] libmachine: Using SSH client type: native
	I0816 12:22:02.090458   11845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0816 12:22:02.090472   11845 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:22:02.197425   11845 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723810922.174464922
	
	I0816 12:22:02.197453   11845 fix.go:216] guest clock: 1723810922.174464922
	I0816 12:22:02.197461   11845 fix.go:229] Guest: 2024-08-16 12:22:02.174464922 +0000 UTC Remote: 2024-08-16 12:22:02.087214216 +0000 UTC m=+25.306065307 (delta=87.250706ms)
	I0816 12:22:02.197495   11845 fix.go:200] guest clock delta is within tolerance: 87.250706ms
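The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the 87.250706ms delta. A minimal Go sketch of that comparison, assuming a 1-second tolerance (the actual threshold is not shown in this log):

// Illustrative sketch only: parse a guest `date +%s.%N` string and check the
// host/guest clock delta against an assumed tolerance. Values are taken from
// the log lines above; the 1s tolerance is an assumption for the example.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1723810922.174464922" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad/truncate to 9 digits so ".1" means 100ms, not 1ns.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723810922.174464922")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, time.August, 16, 12, 22, 2, 87214216, time.UTC) // "Remote" value from the log
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed; minikube's real threshold may differ
	fmt.Printf("clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}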
	I0816 12:22:02.197502   11845 start.go:83] releasing machines lock for "addons-966941", held for 25.319977694s
	I0816 12:22:02.197526   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.197792   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:02.200291   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.200584   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.200611   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.200753   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.201288   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.201467   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:02.201542   11845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:22:02.201591   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:02.201679   11845 ssh_runner.go:195] Run: cat /version.json
	I0816 12:22:02.201697   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:02.204038   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204199   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204351   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.204376   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204480   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:02.204573   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:02.204599   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:02.204622   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.204747   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:02.204804   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:02.204893   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:02.204969   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:02.205046   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:02.205194   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:02.282405   11845 ssh_runner.go:195] Run: systemctl --version
	I0816 12:22:02.310959   11845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:22:02.463048   11845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:22:02.469095   11845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:22:02.469163   11845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:22:02.484875   11845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:22:02.484896   11845 start.go:495] detecting cgroup driver to use...
	I0816 12:22:02.484968   11845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:22:02.500442   11845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:22:02.513648   11845 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:22:02.513694   11845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:22:02.526809   11845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:22:02.539950   11845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:22:02.654923   11845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:22:02.814398   11845 docker.go:233] disabling docker service ...
	I0816 12:22:02.814468   11845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:22:02.828862   11845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:22:02.841976   11845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:22:02.968630   11845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:22:03.085351   11845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:22:03.098835   11845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:22:03.117187   11845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:22:03.117258   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.127227   11845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:22:03.127281   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.137242   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.146828   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.156523   11845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:22:03.166478   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.176319   11845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:22:03.193240   11845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
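The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl). A small Go sketch of the same rewrites applied to an in-memory copy of the file; the starting file contents below are assumed for illustration:

// Illustrative sketch only: apply the 02-crio.conf edits shown in the log
// above to an in-memory string. The initial contents are an assumption.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\npause_image = \"registry.k8s.io/pause:3.9\"\n"

	// pause_image -> registry.k8s.io/pause:3.10
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// run conmon in the pod cgroup and open unprivileged ports from 0
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")

	fmt.Print(conf)
}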
	I0816 12:22:03.203596   11845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:22:03.212489   11845 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:22:03.212530   11845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:22:03.224505   11845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:22:03.233622   11845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:22:03.348164   11845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:22:03.482079   11845 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:22:03.482211   11845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:22:03.486809   11845 start.go:563] Will wait 60s for crictl version
	I0816 12:22:03.486867   11845 ssh_runner.go:195] Run: which crictl
	I0816 12:22:03.490519   11845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:22:03.527537   11845 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:22:03.527664   11845 ssh_runner.go:195] Run: crio --version
	I0816 12:22:03.554262   11845 ssh_runner.go:195] Run: crio --version
	I0816 12:22:03.590661   11845 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:22:03.591739   11845 main.go:141] libmachine: (addons-966941) Calling .GetIP
	I0816 12:22:03.594182   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:03.594464   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:03.594492   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:03.594670   11845 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:22:03.598688   11845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:22:03.610921   11845 kubeadm.go:883] updating cluster {Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:22:03.611044   11845 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:22:03.611103   11845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:22:03.645533   11845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 12:22:03.645614   11845 ssh_runner.go:195] Run: which lz4
	I0816 12:22:03.649556   11845 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 12:22:03.653575   11845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 12:22:03.653599   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 12:22:04.891048   11845 crio.go:462] duration metric: took 1.241534232s to copy over tarball
	I0816 12:22:04.891116   11845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 12:22:06.973093   11845 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.081946785s)
	I0816 12:22:06.973131   11845 crio.go:469] duration metric: took 2.082059301s to extract the tarball
	I0816 12:22:06.973141   11845 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 12:22:07.009569   11845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:22:07.052109   11845 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:22:07.052137   11845 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:22:07.052146   11845 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.0 crio true true} ...
	I0816 12:22:07.052269   11845 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-966941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:22:07.052339   11845 ssh_runner.go:195] Run: crio config
	I0816 12:22:07.096889   11845 cni.go:84] Creating CNI manager for ""
	I0816 12:22:07.096924   11845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:22:07.096936   11845 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:22:07.096963   11845 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-966941 NodeName:addons-966941 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:22:07.097101   11845 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-966941"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:22:07.097171   11845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:22:07.107194   11845 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:22:07.107260   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 12:22:07.116707   11845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 12:22:07.132780   11845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:22:07.148546   11845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
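The kubeadm.yaml.new copied above is the config dumped at kubeadm.go:187, built from the options listed at kubeadm.go:181. A minimal Go sketch of rendering a fragment of such a config from those options; the struct fields and template here are illustrative, not minikube's actual templating code:

// Illustrative sketch only: render a fragment of a kubeadm config like the
// one dumped in the log above. Field names and template are assumptions.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.39.129",
		APIServerPort:    8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.31.0",
	}
	// template.Must panics on a parse error, which is acceptable for a sketch.
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}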
	I0816 12:22:07.164109   11845 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0816 12:22:07.167796   11845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:22:07.179782   11845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:22:07.286913   11845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:22:07.302836   11845 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941 for IP: 192.168.39.129
	I0816 12:22:07.302855   11845 certs.go:194] generating shared ca certs ...
	I0816 12:22:07.302870   11845 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.302995   11845 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:22:07.515227   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt ...
	I0816 12:22:07.515252   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt: {Name:mkf4a08bf4f9517231e76adaa006f3cfec5b8c3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.515400   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key ...
	I0816 12:22:07.515410   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key: {Name:mkeb561bc804238c8341bd7caa5e937264af6e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.515479   11845 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:22:07.665289   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt ...
	I0816 12:22:07.665313   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt: {Name:mk6e24ac0958fb888c7b45e9b9ff4f9b47a400f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.665469   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key ...
	I0816 12:22:07.665479   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key: {Name:mk52e12681516341823900b431cf27eff2c25926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.665544   11845 certs.go:256] generating profile certs ...
	I0816 12:22:07.665593   11845 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.key
	I0816 12:22:07.665608   11845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt with IP's: []
	I0816 12:22:07.869665   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt ...
	I0816 12:22:07.869706   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: {Name:mk4fd6c50a34763252e9be1fa8164abc03f798c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.869941   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.key ...
	I0816 12:22:07.869962   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.key: {Name:mke945e0b84d1e635d8998ea4f5f2312ee99d533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:07.870064   11845 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab
	I0816 12:22:07.870089   11845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]
	I0816 12:22:08.155629   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab ...
	I0816 12:22:08.155657   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab: {Name:mkc9e55f65455e2a2112f379dd348dcb607f2c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.155840   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab ...
	I0816 12:22:08.155861   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab: {Name:mk3c3c0d79dd1f95b45b87062287113b27972793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.155955   11845 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt.b0466fab -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt
	I0816 12:22:08.156046   11845 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key.b0466fab -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key
	I0816 12:22:08.156115   11845 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key
	I0816 12:22:08.156137   11845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt with IP's: []
	I0816 12:22:08.256661   11845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt ...
	I0816 12:22:08.256690   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt: {Name:mke2f3cbe449def9dd50c5af26d075a11d855b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.256872   11845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key ...
	I0816 12:22:08.256889   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key: {Name:mk8c52ea3248ce4904141d7f91b4cbbec73df04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:08.257118   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:22:08.257157   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:22:08.257177   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:22:08.257202   11845 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:22:08.257768   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:22:08.283301   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:22:08.306394   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:22:08.329281   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:22:08.351799   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 12:22:08.374345   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 12:22:08.397056   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:22:08.426561   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 12:22:08.450869   11845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:22:08.474499   11845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:22:08.490373   11845 ssh_runner.go:195] Run: openssl version
	I0816 12:22:08.495969   11845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:22:08.506077   11845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:22:08.510159   11845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:22:08.510206   11845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:22:08.515693   11845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:22:08.525583   11845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:22:08.529444   11845 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:22:08.529487   11845 kubeadm.go:392] StartCluster: {Name:addons-966941 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-966941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:22:08.529553   11845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:22:08.529606   11845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:22:08.562869   11845 cri.go:89] found id: ""
	I0816 12:22:08.562938   11845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 12:22:08.572825   11845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 12:22:08.581820   11845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 12:22:08.590928   11845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 12:22:08.590947   11845 kubeadm.go:157] found existing configuration files:
	
	I0816 12:22:08.590988   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 12:22:08.599064   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 12:22:08.599122   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 12:22:08.608131   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 12:22:08.616329   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 12:22:08.616379   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 12:22:08.624988   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 12:22:08.633413   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 12:22:08.633466   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 12:22:08.641933   11845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 12:22:08.650507   11845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 12:22:08.650560   11845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 12:22:08.659696   11845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 12:22:08.716290   11845 kubeadm.go:310] W0816 12:22:08.700218     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:22:08.716871   11845 kubeadm.go:310] W0816 12:22:08.700996     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:22:08.833875   11845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 12:22:18.614213   11845 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 12:22:18.614280   11845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 12:22:18.614382   11845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 12:22:18.614524   11845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 12:22:18.614640   11845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 12:22:18.614735   11845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 12:22:18.616409   11845 out.go:235]   - Generating certificates and keys ...
	I0816 12:22:18.616487   11845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 12:22:18.616549   11845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 12:22:18.616609   11845 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 12:22:18.616659   11845 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 12:22:18.616710   11845 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 12:22:18.616773   11845 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 12:22:18.616861   11845 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 12:22:18.617015   11845 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-966941 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0816 12:22:18.617100   11845 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 12:22:18.617228   11845 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-966941 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0816 12:22:18.617319   11845 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 12:22:18.617419   11845 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 12:22:18.617522   11845 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 12:22:18.617602   11845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 12:22:18.617682   11845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 12:22:18.617768   11845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 12:22:18.617846   11845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 12:22:18.617938   11845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 12:22:18.618005   11845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 12:22:18.618122   11845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 12:22:18.618223   11845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 12:22:18.619736   11845 out.go:235]   - Booting up control plane ...
	I0816 12:22:18.619829   11845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 12:22:18.619917   11845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 12:22:18.619992   11845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 12:22:18.620091   11845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 12:22:18.620169   11845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 12:22:18.620203   11845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 12:22:18.620315   11845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 12:22:18.620423   11845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 12:22:18.620512   11845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.296739ms
	I0816 12:22:18.620583   11845 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 12:22:18.620635   11845 kubeadm.go:310] [api-check] The API server is healthy after 5.001574902s
	I0816 12:22:18.620731   11845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 12:22:18.620839   11845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 12:22:18.620888   11845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 12:22:18.621055   11845 kubeadm.go:310] [mark-control-plane] Marking the node addons-966941 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 12:22:18.621112   11845 kubeadm.go:310] [bootstrap-token] Using token: 7fq1v5.5ofnkq5fbptaxy8o
	I0816 12:22:18.622308   11845 out.go:235]   - Configuring RBAC rules ...
	I0816 12:22:18.622392   11845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 12:22:18.622465   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 12:22:18.622584   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 12:22:18.622754   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 12:22:18.622901   11845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 12:22:18.623039   11845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 12:22:18.623164   11845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 12:22:18.623208   11845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 12:22:18.623282   11845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 12:22:18.623292   11845 kubeadm.go:310] 
	I0816 12:22:18.623377   11845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 12:22:18.623387   11845 kubeadm.go:310] 
	I0816 12:22:18.623477   11845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 12:22:18.623484   11845 kubeadm.go:310] 
	I0816 12:22:18.623505   11845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 12:22:18.623559   11845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 12:22:18.623606   11845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 12:22:18.623612   11845 kubeadm.go:310] 
	I0816 12:22:18.623656   11845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 12:22:18.623661   11845 kubeadm.go:310] 
	I0816 12:22:18.623703   11845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 12:22:18.623709   11845 kubeadm.go:310] 
	I0816 12:22:18.623752   11845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 12:22:18.623817   11845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 12:22:18.623880   11845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 12:22:18.623886   11845 kubeadm.go:310] 
	I0816 12:22:18.623974   11845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 12:22:18.624081   11845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 12:22:18.624096   11845 kubeadm.go:310] 
	I0816 12:22:18.624198   11845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7fq1v5.5ofnkq5fbptaxy8o \
	I0816 12:22:18.624320   11845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 12:22:18.624348   11845 kubeadm.go:310] 	--control-plane 
	I0816 12:22:18.624356   11845 kubeadm.go:310] 
	I0816 12:22:18.624466   11845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 12:22:18.624476   11845 kubeadm.go:310] 
	I0816 12:22:18.624577   11845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7fq1v5.5ofnkq5fbptaxy8o \
	I0816 12:22:18.624725   11845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 12:22:18.624738   11845 cni.go:84] Creating CNI manager for ""
	I0816 12:22:18.624744   11845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:22:18.626158   11845 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 12:22:18.627164   11845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 12:22:18.637696   11845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 12:22:18.656322   11845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 12:22:18.656398   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:18.656402   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-966941 minikube.k8s.io/updated_at=2024_08_16T12_22_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=addons-966941 minikube.k8s.io/primary=true
	I0816 12:22:18.798830   11845 ops.go:34] apiserver oom_adj: -16
	I0816 12:22:18.798952   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:19.299911   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:19.799392   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:20.299457   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:20.800128   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:21.300064   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:21.799276   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:22.299033   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:22.799901   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:23.299827   11845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:22:23.468244   11845 kubeadm.go:1113] duration metric: took 4.811897383s to wait for elevateKubeSystemPrivileges
	I0816 12:22:23.468276   11845 kubeadm.go:394] duration metric: took 14.938792629s to StartCluster
	I0816 12:22:23.468295   11845 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:23.468438   11845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:22:23.468766   11845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:22:23.468983   11845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 12:22:23.468998   11845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:22:23.469056   11845 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0816 12:22:23.469150   11845 addons.go:69] Setting yakd=true in profile "addons-966941"
	I0816 12:22:23.469189   11845 addons.go:234] Setting addon yakd=true in "addons-966941"
	I0816 12:22:23.469225   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469235   11845 config.go:182] Loaded profile config "addons-966941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:22:23.469291   11845 addons.go:69] Setting inspektor-gadget=true in profile "addons-966941"
	I0816 12:22:23.469321   11845 addons.go:234] Setting addon inspektor-gadget=true in "addons-966941"
	I0816 12:22:23.469355   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469436   11845 addons.go:69] Setting storage-provisioner=true in profile "addons-966941"
	I0816 12:22:23.469469   11845 addons.go:234] Setting addon storage-provisioner=true in "addons-966941"
	I0816 12:22:23.469502   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469661   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.469687   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.469713   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.469741   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.469929   11845 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-966941"
	I0816 12:22:23.469946   11845 addons.go:69] Setting registry=true in profile "addons-966941"
	I0816 12:22:23.469965   11845 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-966941"
	I0816 12:22:23.469963   11845 addons.go:69] Setting volcano=true in profile "addons-966941"
	I0816 12:22:23.469972   11845 addons.go:234] Setting addon registry=true in "addons-966941"
	I0816 12:22:23.469970   11845 addons.go:69] Setting volumesnapshots=true in profile "addons-966941"
	I0816 12:22:23.469996   11845 addons.go:234] Setting addon volcano=true in "addons-966941"
	I0816 12:22:23.469998   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.469932   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470019   11845 addons.go:234] Setting addon volumesnapshots=true in "addons-966941"
	I0816 12:22:23.470025   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470047   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470052   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470339   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470358   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470387   11845 addons.go:69] Setting metrics-server=true in profile "addons-966941"
	I0816 12:22:23.470395   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470413   11845 addons.go:234] Setting addon metrics-server=true in "addons-966941"
	I0816 12:22:23.470424   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470435   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470435   11845 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-966941"
	I0816 12:22:23.470446   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470468   11845 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-966941"
	I0816 12:22:23.470751   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470791   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470795   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.470801   11845 addons.go:69] Setting gcp-auth=true in profile "addons-966941"
	I0816 12:22:23.470821   11845 mustload.go:65] Loading cluster: addons-966941
	I0816 12:22:23.469998   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.470838   11845 addons.go:69] Setting helm-tiller=true in profile "addons-966941"
	I0816 12:22:23.470860   11845 addons.go:234] Setting addon helm-tiller=true in "addons-966941"
	I0816 12:22:23.470881   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.471196   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471198   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471217   11845 addons.go:69] Setting cloud-spanner=true in profile "addons-966941"
	I0816 12:22:23.471221   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.471240   11845 addons.go:234] Setting addon cloud-spanner=true in "addons-966941"
	I0816 12:22:23.471262   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.471276   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.471341   11845 addons.go:69] Setting ingress-dns=true in profile "addons-966941"
	I0816 12:22:23.471362   11845 addons.go:234] Setting addon ingress-dns=true in "addons-966941"
	I0816 12:22:23.471546   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.471597   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471622   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.471663   11845 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-966941"
	I0816 12:22:23.471705   11845 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-966941"
	I0816 12:22:23.471861   11845 addons.go:69] Setting default-storageclass=true in profile "addons-966941"
	I0816 12:22:23.471885   11845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-966941"
	I0816 12:22:23.471910   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.471941   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470824   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.470428   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.472380   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.474283   11845 config.go:182] Loaded profile config "addons-966941": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:22:23.474634   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.474661   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.475322   11845 out.go:177] * Verifying Kubernetes components...
	I0816 12:22:23.477124   11845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:22:23.470832   11845 addons.go:69] Setting ingress=true in profile "addons-966941"
	I0816 12:22:23.477383   11845 addons.go:234] Setting addon ingress=true in "addons-966941"
	I0816 12:22:23.477443   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.477913   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.477954   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.490375   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0816 12:22:23.491344   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.498560   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0816 12:22:23.498677   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.498696   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.499177   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.499789   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.499828   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.500324   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.500413   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0816 12:22:23.500495   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0816 12:22:23.501008   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.501116   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.501130   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.501141   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.501449   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.501580   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.501594   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.501939   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.501948   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.501968   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.502542   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.502574   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.502813   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.502824   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.503235   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.503800   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.503832   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.503857   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0816 12:22:23.504208   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.504239   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40709
	I0816 12:22:23.504638   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.504664   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.505150   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.507689   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0816 12:22:23.513139   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0816 12:22:23.513167   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0816 12:22:23.513252   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I0816 12:22:23.513505   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.513550   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.513577   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.513654   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.513673   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.514105   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.514121   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.514247   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.514258   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.514376   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.514398   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.514835   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.514918   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.514957   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.515072   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.515085   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.515138   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.515726   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.515745   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.516126   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.516158   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.516855   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.516885   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.520882   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.520931   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.521174   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.521253   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.521394   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.521407   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.521783   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.521810   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.522169   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.522466   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.522499   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.523108   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.523151   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.524442   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
	I0816 12:22:23.537094   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I0816 12:22:23.537511   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.537955   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.537975   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.538312   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.538503   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.539797   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0816 12:22:23.540209   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.540368   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.541506   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0816 12:22:23.541599   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.541618   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.541720   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I0816 12:22:23.542155   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.542498   11845 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 12:22:23.542709   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.542727   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.542796   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.543450   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.543494   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.543505   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.543811   11845 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 12:22:23.543826   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 12:22:23.543842   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.544082   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.544483   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.544513   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.545195   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.545227   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.545819   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.545995   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.547492   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.547991   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.548510   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.548531   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.548692   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.548830   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.549389   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 12:22:23.549427   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.549591   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.549610   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.550219   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.550246   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.550473   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.550591   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.551103   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 12:22:23.551119   11845 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 12:22:23.551136   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.552752   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.554172   11845 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0816 12:22:23.554668   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.554903   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.554928   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.555234   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.555396   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.555429   11845 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0816 12:22:23.555439   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0816 12:22:23.555454   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.555492   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.555963   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.556269   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0816 12:22:23.557218   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.557746   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.557771   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.558100   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.559138   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.559571   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.560121   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0816 12:22:23.560336   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.560353   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.560430   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.560581   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.560644   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.560915   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.561051   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.561411   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.561428   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.561795   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.561975   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.563802   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.565590   11845 addons.go:234] Setting addon default-storageclass=true in "addons-966941"
	I0816 12:22:23.565637   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.565771   11845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 12:22:23.565914   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0816 12:22:23.566027   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.566058   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.566268   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.566724   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.566741   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.567118   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.567151   11845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:22:23.567165   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 12:22:23.567183   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.567333   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.569799   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.571449   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.571680   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0816 12:22:23.571968   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.571986   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.572210   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.572378   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.572523   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.572635   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.573906   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:22:23.575359   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:22:23.575492   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0816 12:22:23.576076   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.576575   11845 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:22:23.576596   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 12:22:23.576615   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.576636   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.576653   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.577037   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.577195   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.577857   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0816 12:22:23.578801   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.578860   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.579381   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.579398   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.579946   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.580242   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42409
	I0816 12:22:23.580494   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0816 12:22:23.580647   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.580659   11845 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 12:22:23.580738   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.581129   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.581147   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.581471   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.581647   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.582173   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 12:22:23.582201   11845 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 12:22:23.582220   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.582373   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.582472   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0816 12:22:23.582829   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.584158   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.584175   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.584226   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.584245   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.584263   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.584291   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.584391   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.584401   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.584754   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.584985   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.585825   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.585929   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.585937   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0816 12:22:23.585701   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.586687   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.586744   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.586785   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.587317   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0816 12:22:23.587364   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.587604   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.587984   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.588161   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.588180   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.588758   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0816 12:22:23.588835   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.588962   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.588970   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.589153   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.589190   11845 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 12:22:23.589295   11845 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 12:22:23.589371   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.589571   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.589572   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.589781   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.590236   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.590258   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.590350   11845 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 12:22:23.590463   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 12:22:23.590474   11845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 12:22:23.590490   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.590574   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.590751   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.591455   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.592456   11845 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:22:23.592480   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 12:22:23.592499   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.592573   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.592619   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:23.592627   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:23.592768   11845 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 12:22:23.593757   11845 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-966941"
	I0816 12:22:23.593904   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.593996   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0816 12:22:23.594290   11845 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 12:22:23.594295   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.594304   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 12:22:23.594320   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.594339   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.594436   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.595656   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:23.595673   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:23.595747   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.595829   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:23.595889   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:23.596017   11845 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 12:22:23.596929   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:23.596957   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:23.596970   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:23.596959   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	W0816 12:22:23.597047   11845 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 12:22:23.597104   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.597131   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.597161   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.597178   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.597411   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.597477   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.597559   11845 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:22:23.597571   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 12:22:23.597587   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.597650   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.597860   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.598023   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.598042   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.598456   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I0816 12:22:23.598537   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.598543   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.598555   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.598887   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.598905   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.599104   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.599248   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.599623   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.599631   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.599646   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.600188   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.601018   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.600452   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.601051   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.600853   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.601295   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.601362   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.601379   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.601407   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.601671   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.601805   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.601805   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.601833   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.601983   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.602118   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.602237   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.602283   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:23.602640   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.602672   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.604167   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.604749   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.605344   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.605807   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.605843   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.605865   11845 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 12:22:23.606165   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.606187   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.606418   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.606478   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.606727   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.606737   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.606846   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.606901   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.606999   11845 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 12:22:23.607012   11845 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 12:22:23.607017   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.607025   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.607072   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.609876   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.610272   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.610294   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.610452   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.610615   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.610730   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.610834   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	W0816 12:22:23.612590   11845 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42514->192.168.39.129:22: read: connection reset by peer
	I0816 12:22:23.612616   11845 retry.go:31] will retry after 194.925376ms: ssh: handshake failed: read tcp 192.168.39.1:42514->192.168.39.129:22: read: connection reset by peer
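	The two lines above show a transient SSH handshake failure being absorbed by minikube's generic retry helper: the dial error is logged as a warning and the connection attempt is repeated after a short delay. Below is a minimal, self-contained sketch of that retry-with-delay pattern; the function name, attempt count and backoff bounds are assumptions for illustration and are not taken from retry.go.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a small random
	// delay between tries, and returns the last error if all fail.
	func retry(attempts int, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// illustrative backoff window, not minikube's actual values
			delay := time.Duration(100+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, func() error {
			calls++
			if calls < 3 {
				// simulated transient failure, mirroring the log message above
				return fmt.Errorf("ssh: handshake failed: connection reset by peer")
			}
			return nil
		})
		fmt.Println("result:", err)
	}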
	I0816 12:22:23.621657   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0816 12:22:23.621877   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0816 12:22:23.622012   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I0816 12:22:23.622136   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.622238   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.622397   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.622656   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.622678   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.622802   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.622819   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.623162   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.623181   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.623288   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.623306   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.623461   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.623466   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.623615   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.624316   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:23.624341   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:23.625197   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.625434   11845 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 12:22:23.625446   11845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 12:22:23.625463   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.625510   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.627185   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 12:22:23.628405   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 12:22:23.628762   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.629316   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.629346   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.629526   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.629724   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.629805   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0816 12:22:23.629978   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.630104   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.630387   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.630906   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.630920   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.630956   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 12:22:23.631192   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.631374   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.633515   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 12:22:23.634544   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 12:22:23.635576   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 12:22:23.636748   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 12:22:23.637897   11845 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 12:22:23.638825   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 12:22:23.638844   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 12:22:23.638866   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.642288   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.642741   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.642759   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.642793   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 12:22:23.642994   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.643190   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.643265   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:23.643330   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.643478   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.643830   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:23.643845   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:23.644107   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:23.644284   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:23.646627   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:23.648447   11845 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 12:22:23.649649   11845 out.go:177]   - Using image docker.io/busybox:stable
	I0816 12:22:23.650774   11845 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:22:23.650791   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 12:22:23.650809   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:23.653467   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.653851   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:23.653873   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:23.654039   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:23.654217   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:23.654369   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:23.654521   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:23.941405   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 12:22:23.941431   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 12:22:24.022139   11845 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 12:22:24.022163   11845 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 12:22:24.023642   11845 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 12:22:24.023658   11845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 12:22:24.043309   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 12:22:24.043329   11845 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 12:22:24.094527   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:22:24.096079   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:22:24.099657   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 12:22:24.099671   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 12:22:24.110544   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 12:22:24.122528   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:22:24.124784   11845 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0816 12:22:24.124800   11845 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0816 12:22:24.146421   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 12:22:24.146442   11845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 12:22:24.148072   11845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:22:24.148158   11845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 12:22:24.158647   11845 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 12:22:24.158659   11845 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 12:22:24.166111   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:22:24.182153   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 12:22:24.205113   11845 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 12:22:24.205135   11845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 12:22:24.225502   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 12:22:24.225529   11845 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 12:22:24.226264   11845 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:22:24.226285   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 12:22:24.228254   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:22:24.280203   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 12:22:24.280231   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 12:22:24.372348   11845 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:22:24.372369   11845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 12:22:24.400643   11845 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 12:22:24.400668   11845 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 12:22:24.404851   11845 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 12:22:24.404873   11845 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0816 12:22:24.496122   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 12:22:24.496149   11845 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 12:22:24.512916   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:22:24.515105   11845 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 12:22:24.515128   11845 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 12:22:24.521305   11845 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 12:22:24.521323   11845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 12:22:24.583410   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 12:22:24.583436   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 12:22:24.621221   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:22:24.665459   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 12:22:24.704783   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 12:22:24.704807   11845 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 12:22:24.716503   11845 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 12:22:24.716521   11845 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 12:22:24.784729   11845 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:22:24.784758   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 12:22:24.829846   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 12:22:24.829872   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 12:22:24.925868   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:22:24.938254   11845 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:22:24.938275   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 12:22:24.963895   11845 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 12:22:24.963919   11845 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 12:22:25.097380   11845 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 12:22:25.097405   11845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 12:22:25.154382   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:22:25.238674   11845 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 12:22:25.238700   11845 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 12:22:25.318146   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 12:22:25.318172   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 12:22:25.518885   11845 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:22:25.518912   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 12:22:25.615746   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 12:22:25.615779   11845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 12:22:25.761737   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:22:25.815462   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 12:22:25.815484   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 12:22:26.087710   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 12:22:26.087732   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 12:22:26.433359   11845 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:22:26.433379   11845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 12:22:26.867961   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:22:28.504924   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.408815225s)
	I0816 12:22:28.504971   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.504984   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.504989   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.394415434s)
	I0816 12:22:28.504997   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.410438542s)
	I0816 12:22:28.505026   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505043   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.382495202s)
	I0816 12:22:28.505045   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505062   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505072   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505075   11845 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.356960751s)
	I0816 12:22:28.505122   11845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.356940327s)
	I0816 12:22:28.505136   11845 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
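	The /bin/bash pipeline that just completed rewrites the coredns ConfigMap so the guest can resolve host.minikube.internal to the host bridge address. Rendered, its sed expressions insert a log directive before errors and a hosts block before the forward plugin; an approximate excerpt of the resulting Corefile is shown below (the surrounding plugin list is the stock kubeadm Corefile, abbreviated here).

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}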
	I0816 12:22:28.505205   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505244   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505253   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505261   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505267   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505340   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505365   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505384   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505393   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505392   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505400   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505370   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505413   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505422   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505430   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505485   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505494   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.505612   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505622   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505630   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505635   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.505661   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.505668   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.505677   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.505683   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.506274   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.506303   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.506310   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.506535   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.506552   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.506985   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.506998   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.507017   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.507952   11845 node_ready.go:35] waiting up to 6m0s for node "addons-966941" to be "Ready" ...
	I0816 12:22:28.548105   11845 node_ready.go:49] node "addons-966941" has status "Ready":"True"
	I0816 12:22:28.548125   11845 node_ready.go:38] duration metric: took 40.150541ms for node "addons-966941" to be "Ready" ...
	I0816 12:22:28.548135   11845 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
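	pod_ready.go above polls the API server until every pod carrying one of the listed system-critical labels reports the Ready condition. A minimal client-go sketch of that kind of label-selector wait follows; it is not minikube's implementation, and the poll interval, namespace, selector and kubeconfig handling are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	// waitForLabel lists pods matching selector in ns until all of them are Ready.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podIsReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second): // poll interval is an assumption
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForLabel(ctx, cs, "kube-system", "k8s-app=kube-dns"); err != nil {
			panic(err)
		}
		fmt.Println("pods Ready")
	}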
	I0816 12:22:28.559837   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.559859   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.560067   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.560081   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.589970   11845 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmsfb" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:28.648867   11845 pod_ready.go:93] pod "coredns-6f6b679f8f-jmsfb" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:28.648893   11845 pod_ready.go:82] duration metric: took 58.898081ms for pod "coredns-6f6b679f8f-jmsfb" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:28.648918   11845 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xvjnw" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:28.768413   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.586229371s)
	I0816 12:22:28.768467   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.768481   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.768413   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.602267797s)
	I0816 12:22:28.768534   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.768547   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.768786   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.768805   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.768817   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.768825   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.770140   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.770146   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.770158   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.770162   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.770140   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.770136   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:28.770172   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.770229   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.770407   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.770420   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:28.820892   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:28.820930   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:28.821214   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:28.821233   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:29.050091   11845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-966941" context rescaled to 1 replicas
	I0816 12:22:29.154870   11845 pod_ready.go:93] pod "coredns-6f6b679f8f-xvjnw" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:29.154891   11845 pod_ready.go:82] duration metric: took 505.964599ms for pod "coredns-6f6b679f8f-xvjnw" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:29.154902   11845 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:30.682641   11845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 12:22:30.682678   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:30.685680   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:30.686039   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:30.686068   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:30.686248   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:30.686471   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:30.686628   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:30.686781   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:31.049976   11845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 12:22:31.194037   11845 pod_ready.go:103] pod "etcd-addons-966941" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:31.333523   11845 addons.go:234] Setting addon gcp-auth=true in "addons-966941"
	I0816 12:22:31.333574   11845 host.go:66] Checking if "addons-966941" exists ...
	I0816 12:22:31.333893   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:31.333926   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:31.349282   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0816 12:22:31.349713   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:31.350164   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:31.350184   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:31.350508   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:31.351001   11845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:22:31.351032   11845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:22:31.367339   11845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0816 12:22:31.367754   11845 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:22:31.368298   11845 main.go:141] libmachine: Using API Version  1
	I0816 12:22:31.368322   11845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:22:31.368611   11845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:22:31.368842   11845 main.go:141] libmachine: (addons-966941) Calling .GetState
	I0816 12:22:31.370404   11845 main.go:141] libmachine: (addons-966941) Calling .DriverName
	I0816 12:22:31.370635   11845 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 12:22:31.370662   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHHostname
	I0816 12:22:31.373350   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:31.373773   11845 main.go:141] libmachine: (addons-966941) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:dd:30", ip: ""} in network mk-addons-966941: {Iface:virbr1 ExpiryTime:2024-08-16 13:21:52 +0000 UTC Type:0 Mac:52:54:00:72:dd:30 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-966941 Clientid:01:52:54:00:72:dd:30}
	I0816 12:22:31.373801   11845 main.go:141] libmachine: (addons-966941) DBG | domain addons-966941 has defined IP address 192.168.39.129 and MAC address 52:54:00:72:dd:30 in network mk-addons-966941
	I0816 12:22:31.373978   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHPort
	I0816 12:22:31.374172   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHKeyPath
	I0816 12:22:31.374360   11845 main.go:141] libmachine: (addons-966941) Calling .GetSSHUsername
	I0816 12:22:31.374531   11845 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/addons-966941/id_rsa Username:docker}
	I0816 12:22:31.703903   11845 pod_ready.go:93] pod "etcd-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.703927   11845 pod_ready.go:82] duration metric: took 2.549017221s for pod "etcd-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.703940   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.722356   11845 pod_ready.go:93] pod "kube-apiserver-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.722378   11845 pod_ready.go:82] duration metric: took 18.43144ms for pod "kube-apiserver-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.722396   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.739275   11845 pod_ready.go:93] pod "kube-controller-manager-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.739298   11845 pod_ready.go:82] duration metric: took 16.893964ms for pod "kube-controller-manager-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.739312   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qnd5q" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.779959   11845 pod_ready.go:93] pod "kube-proxy-qnd5q" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:31.779981   11845 pod_ready.go:82] duration metric: took 40.66068ms for pod "kube-proxy-qnd5q" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:31.779993   11845 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:32.614978   11845 pod_ready.go:93] pod "kube-scheduler-addons-966941" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:32.615001   11845 pod_ready.go:82] duration metric: took 835.000712ms for pod "kube-scheduler-addons-966941" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:32.615015   11845 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:32.987059   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.758774581s)
	I0816 12:22:32.987106   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987113   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.474166804s)
	I0816 12:22:32.987119   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987151   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987162   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987191   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.365943264s)
	I0816 12:22:32.987213   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987226   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987250   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.321765642s)
	I0816 12:22:32.987272   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987282   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987293   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.06138814s)
	I0816 12:22:32.987323   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987338   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987425   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.833001652s)
	W0816 12:22:32.987462   11845 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 12:22:32.987492   11845 retry.go:31] will retry after 159.215338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
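	The failure above is the usual single-batch CRD ordering problem: the VolumeSnapshotClass from csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, hence "no matches for kind VolumeSnapshotClass". minikube simply retries; the retry at 12:22:33 further down re-applies the same manifests with --force and succeeds. Done by hand, the equivalent fix is to apply the CRD first and wait for it to become Established before applying the class, for example:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml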
	I0816 12:22:32.987576   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.225805494s)
	I0816 12:22:32.987601   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987611   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987664   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987667   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987671   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987681   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987685   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987688   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987697   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987705   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987711   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987720   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987731   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.987735   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987743   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.987746   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987749   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987752   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.987755   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987760   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987762   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987767   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.987690   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.987792   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.989391   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989415   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989437   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989444   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989453   11845 addons.go:475] Verifying addon metrics-server=true in "addons-966941"
	I0816 12:22:32.989464   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989477   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989486   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.989509   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.989564   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989601   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989621   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989630   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989639   11845 addons.go:475] Verifying addon ingress=true in "addons-966941"
	I0816 12:22:32.989682   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.989442   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989701   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989707   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989711   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989904   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.989918   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.989715   11845 addons.go:475] Verifying addon registry=true in "addons-966941"
	I0816 12:22:32.989742   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:32.990014   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:32.990861   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:32.990891   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:32.991233   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:32.992552   11845 out.go:177] * Verifying ingress addon...
	I0816 12:22:32.992557   11845 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-966941 service yakd-dashboard -n yakd-dashboard
	
	I0816 12:22:32.992552   11845 out.go:177] * Verifying registry addon...
	I0816 12:22:32.994649   11845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 12:22:32.994649   11845 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 12:22:33.011466   11845 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 12:22:33.011491   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:33.011619   11845 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 12:22:33.011631   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:33.147161   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:22:33.525848   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:33.528790   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:33.864826   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.996816198s)
	I0816 12:22:33.864845   11845 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.494187916s)
	I0816 12:22:33.864883   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:33.864897   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:33.865221   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:33.865254   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:33.865265   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:33.865274   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:33.865287   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:33.865512   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:33.865529   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:33.865541   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:33.865556   11845 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-966941"
	I0816 12:22:33.866611   11845 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 12:22:33.866674   11845 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 12:22:33.868781   11845 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:22:33.869470   11845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 12:22:33.870349   11845 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 12:22:33.870371   11845 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 12:22:33.886491   11845 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 12:22:33.886518   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:33.972096   11845 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 12:22:33.972120   11845 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 12:22:34.006271   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:34.006533   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:34.050901   11845 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:22:34.050928   11845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 12:22:34.109041   11845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:22:34.387980   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:34.500505   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:34.500624   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:34.628445   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:34.874499   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:35.000252   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:35.000615   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:35.042979   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.895774601s)
	I0816 12:22:35.043031   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.043053   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.043330   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.043349   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.043368   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.043380   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.043664   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.043684   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.392167   11845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.283088349s)
	I0816 12:22:35.392218   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.392235   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.392586   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:35.392625   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.392635   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.392644   11845 main.go:141] libmachine: Making call to close driver server
	I0816 12:22:35.392655   11845 main.go:141] libmachine: (addons-966941) Calling .Close
	I0816 12:22:35.392887   11845 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:22:35.392951   11845 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:22:35.392957   11845 main.go:141] libmachine: (addons-966941) DBG | Closing plugin on server side
	I0816 12:22:35.393995   11845 addons.go:475] Verifying addon gcp-auth=true in "addons-966941"
	I0816 12:22:35.395823   11845 out.go:177] * Verifying gcp-auth addon...
	I0816 12:22:35.397753   11845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 12:22:35.408487   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:35.438162   11845 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 12:22:35.438181   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:35.501852   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:35.502770   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:35.883274   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:35.902727   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:36.001402   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:36.002596   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:36.374254   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:36.401413   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:36.501446   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:36.501636   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:36.874904   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:36.901374   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:36.999651   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:37.000010   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:37.121105   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:37.374753   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:37.400837   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:37.499949   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:37.500302   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:37.879033   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:37.974183   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:37.999307   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:37.999500   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:38.572222   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:38.572530   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:38.574170   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:38.577558   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:38.874998   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:38.901220   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:39.000196   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:39.000503   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:39.122469   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:39.374716   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:39.400957   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:39.498852   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:39.499125   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:39.873798   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:39.901943   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:40.000064   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:40.000345   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:40.375439   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:40.401389   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:40.503091   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:40.503300   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:40.879224   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:40.902218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:41.000640   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:41.001241   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:41.399213   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:41.403044   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:41.499321   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:41.499961   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:41.621620   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:41.874315   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:41.901485   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:41.998390   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:41.998549   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:42.375934   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:42.401266   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:42.498921   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:42.499200   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:42.875387   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:42.902785   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:42.998900   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:42.999872   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:43.374824   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:43.401088   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:43.498446   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:43.498725   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:43.875379   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:43.901903   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:43.999281   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:43.999377   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:44.122122   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:44.374353   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:44.401556   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:44.500367   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:44.500821   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:44.874277   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:44.901461   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:44.999884   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:45.000149   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:45.374182   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:45.401335   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:45.500501   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:45.500635   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:45.875588   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:45.901307   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:45.999508   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:46.000122   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:46.373778   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:46.401493   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:46.499788   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:46.500119   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:46.621018   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:46.874772   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:46.900804   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:46.999438   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:47.000073   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:47.374902   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:47.401145   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:47.498942   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:47.499004   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:47.873894   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:47.901528   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:48.000268   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:48.001360   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:48.375118   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:48.400897   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:48.499656   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:48.500704   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:48.874350   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:48.901571   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:48.999732   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:48.999858   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:49.120968   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:49.376495   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:49.402146   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:49.503154   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:49.505216   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:49.874109   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:49.901306   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:49.999044   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:50.000586   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:50.374233   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:50.401364   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:50.499094   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:50.499453   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:50.874365   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:50.902033   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:50.998275   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:50.999883   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:51.121475   11845 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"False"
	I0816 12:22:51.377854   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:51.401689   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:51.499858   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:51.499952   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:51.875795   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:51.900761   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:51.999047   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:51.999315   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:52.121184   11845 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace has status "Ready":"True"
	I0816 12:22:52.121208   11845 pod_ready.go:82] duration metric: took 19.506185483s for pod "nvidia-device-plugin-daemonset-t2vgg" in "kube-system" namespace to be "Ready" ...
	I0816 12:22:52.121232   11845 pod_ready.go:39] duration metric: took 23.573086665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:22:52.121250   11845 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:22:52.121298   11845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:22:52.138217   11845 api_server.go:72] duration metric: took 28.669188574s to wait for apiserver process to appear ...
	I0816 12:22:52.138242   11845 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:22:52.138262   11845 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0816 12:22:52.142298   11845 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0816 12:22:52.143279   11845 api_server.go:141] control plane version: v1.31.0
	I0816 12:22:52.143297   11845 api_server.go:131] duration metric: took 5.048115ms to wait for apiserver health ...
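	(The healthz probe recorded just above can be reproduced by hand against the same endpoint; this is a rough sketch, with the apiserver address taken from the log and the ca.crt path being the usual minikube location, so treat both as assumptions:

	    # query the apiserver health endpoint from the log; a healthy apiserver answers "ok"
	    curl --cacert ~/.minikube/ca.crt https://192.168.39.129:8443/healthz
	    # or, skipping TLS verification for a quick check
	    curl -k https://192.168.39.129:8443/healthz
	)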
	I0816 12:22:52.143304   11845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:22:52.150910   11845 system_pods.go:59] 18 kube-system pods found
	I0816 12:22:52.150932   11845 system_pods.go:61] "coredns-6f6b679f8f-jmsfb" [541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7] Running
	I0816 12:22:52.150939   11845 system_pods.go:61] "csi-hostpath-attacher-0" [5478a03b-ccb2-41ad-80b2-ac918d2be036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 12:22:52.150947   11845 system_pods.go:61] "csi-hostpath-resizer-0" [4d1634dc-6351-4561-985c-5ce419dd8959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 12:22:52.150958   11845 system_pods.go:61] "csi-hostpathplugin-hxhgw" [b59fe750-7fe2-4c40-bba9-836bc4990c73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 12:22:52.150969   11845 system_pods.go:61] "etcd-addons-966941" [98a85f02-a468-4db0-9f86-69a1339f6f3b] Running
	I0816 12:22:52.150975   11845 system_pods.go:61] "kube-apiserver-addons-966941" [93d8e2a8-a4b0-4e0e-a54f-db67df0f7d4a] Running
	I0816 12:22:52.150981   11845 system_pods.go:61] "kube-controller-manager-addons-966941" [b1bc1e28-2d78-4080-9d44-7d9fdfe18914] Running
	I0816 12:22:52.150990   11845 system_pods.go:61] "kube-ingress-dns-minikube" [ac8db978-31ce-467e-8c0c-585910bf0042] Running
	I0816 12:22:52.150998   11845 system_pods.go:61] "kube-proxy-qnd5q" [0d7c8f55-8a0f-4598-a0fd-2f7116e8af54] Running
	I0816 12:22:52.151002   11845 system_pods.go:61] "kube-scheduler-addons-966941" [28625162-35c5-4cc6-be67-f64f326e8edd] Running
	I0816 12:22:52.151008   11845 system_pods.go:61] "metrics-server-8988944d9-p6z8v" [32196dc2-ada2-4e60-b64c-573967f34e54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 12:22:52.151012   11845 system_pods.go:61] "nvidia-device-plugin-daemonset-t2vgg" [67831983-255a-47c4-9db7-8be119bea725] Running
	I0816 12:22:52.151018   11845 system_pods.go:61] "registry-6fb4cdfc84-pbs55" [ce8c7d7b-e1bd-4400-989e-ff5ee6472906] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 12:22:52.151026   11845 system_pods.go:61] "registry-proxy-ntgtj" [1d1c166b-3b57-45d7-a283-a4e340b16541] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 12:22:52.151033   11845 system_pods.go:61] "snapshot-controller-56fcc65765-c5drr" [071997c6-7740-4297-a69c-b4d219bbebc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.151041   11845 system_pods.go:61] "snapshot-controller-56fcc65765-ln299" [b41b38e8-3e51-4c0c-87b1-6d3abc4889a4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.151047   11845 system_pods.go:61] "storage-provisioner" [be4bc2aa-70f7-48ee-b9f1-46102ba63337] Running
	I0816 12:22:52.151055   11845 system_pods.go:61] "tiller-deploy-b48cc5f79-v26s2" [505f660d-cfba-443f-a970-69b28a26f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 12:22:52.151064   11845 system_pods.go:74] duration metric: took 7.754399ms to wait for pod list to return data ...
	I0816 12:22:52.151078   11845 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:22:52.152806   11845 default_sa.go:45] found service account: "default"
	I0816 12:22:52.152820   11845 default_sa.go:55] duration metric: took 1.735265ms for default service account to be created ...
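	(The default service-account check corresponds roughly to the following kubectl query; the context name is taken from the cluster name in the log, and this is an approximation rather than the exact call the harness makes:

	    # confirm the "default" ServiceAccount exists in the default namespace
	    kubectl --context addons-966941 -n default get serviceaccount default
	)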
	I0816 12:22:52.152826   11845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:22:52.159573   11845 system_pods.go:86] 18 kube-system pods found
	I0816 12:22:52.159593   11845 system_pods.go:89] "coredns-6f6b679f8f-jmsfb" [541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7] Running
	I0816 12:22:52.159602   11845 system_pods.go:89] "csi-hostpath-attacher-0" [5478a03b-ccb2-41ad-80b2-ac918d2be036] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 12:22:52.159610   11845 system_pods.go:89] "csi-hostpath-resizer-0" [4d1634dc-6351-4561-985c-5ce419dd8959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 12:22:52.159621   11845 system_pods.go:89] "csi-hostpathplugin-hxhgw" [b59fe750-7fe2-4c40-bba9-836bc4990c73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 12:22:52.159631   11845 system_pods.go:89] "etcd-addons-966941" [98a85f02-a468-4db0-9f86-69a1339f6f3b] Running
	I0816 12:22:52.159638   11845 system_pods.go:89] "kube-apiserver-addons-966941" [93d8e2a8-a4b0-4e0e-a54f-db67df0f7d4a] Running
	I0816 12:22:52.159644   11845 system_pods.go:89] "kube-controller-manager-addons-966941" [b1bc1e28-2d78-4080-9d44-7d9fdfe18914] Running
	I0816 12:22:52.159654   11845 system_pods.go:89] "kube-ingress-dns-minikube" [ac8db978-31ce-467e-8c0c-585910bf0042] Running
	I0816 12:22:52.159661   11845 system_pods.go:89] "kube-proxy-qnd5q" [0d7c8f55-8a0f-4598-a0fd-2f7116e8af54] Running
	I0816 12:22:52.159665   11845 system_pods.go:89] "kube-scheduler-addons-966941" [28625162-35c5-4cc6-be67-f64f326e8edd] Running
	I0816 12:22:52.159670   11845 system_pods.go:89] "metrics-server-8988944d9-p6z8v" [32196dc2-ada2-4e60-b64c-573967f34e54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 12:22:52.159677   11845 system_pods.go:89] "nvidia-device-plugin-daemonset-t2vgg" [67831983-255a-47c4-9db7-8be119bea725] Running
	I0816 12:22:52.159683   11845 system_pods.go:89] "registry-6fb4cdfc84-pbs55" [ce8c7d7b-e1bd-4400-989e-ff5ee6472906] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 12:22:52.159691   11845 system_pods.go:89] "registry-proxy-ntgtj" [1d1c166b-3b57-45d7-a283-a4e340b16541] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 12:22:52.159700   11845 system_pods.go:89] "snapshot-controller-56fcc65765-c5drr" [071997c6-7740-4297-a69c-b4d219bbebc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.159708   11845 system_pods.go:89] "snapshot-controller-56fcc65765-ln299" [b41b38e8-3e51-4c0c-87b1-6d3abc4889a4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 12:22:52.159717   11845 system_pods.go:89] "storage-provisioner" [be4bc2aa-70f7-48ee-b9f1-46102ba63337] Running
	I0816 12:22:52.159728   11845 system_pods.go:89] "tiller-deploy-b48cc5f79-v26s2" [505f660d-cfba-443f-a970-69b28a26f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 12:22:52.159740   11845 system_pods.go:126] duration metric: took 6.908249ms to wait for k8s-apps to be running ...
	I0816 12:22:52.159752   11845 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:22:52.159796   11845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:22:52.175468   11845 system_svc.go:56] duration metric: took 15.712057ms WaitForService to wait for kubelet
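	(The kubelet liveness check above is a plain systemd query run over SSH inside the VM; something like the following, executed via minikube ssh, approximates it:

	    # exit status 0 means the kubelet unit is active
	    sudo systemctl is-active --quiet kubelet && echo "kubelet running"
	)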
	I0816 12:22:52.175485   11845 kubeadm.go:582] duration metric: took 28.706463274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:22:52.175503   11845 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:22:52.177953   11845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:22:52.177970   11845 node_conditions.go:123] node cpu capacity is 2
	I0816 12:22:52.177981   11845 node_conditions.go:105] duration metric: took 2.474011ms to run NodePressure ...
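	(The ephemeral-storage and CPU capacity figures logged above come from the node object's status; a roughly equivalent manual check, with the node/context name taken from the log:

	    # show the node capacity block, including ephemeral-storage and cpu
	    kubectl --context addons-966941 describe node addons-966941 | grep -A 6 "Capacity:"
	)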
	I0816 12:22:52.177992   11845 start.go:241] waiting for startup goroutines ...
	I0816 12:22:52.374694   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:52.401290   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:52.498772   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:52.499202   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:52.874789   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:52.901374   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:52.998738   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:52.999053   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:53.374628   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:53.400598   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:53.498995   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:53.499805   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:53.874669   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:53.900902   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:53.998447   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:53.999969   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:54.374153   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:54.400994   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:54.499807   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:54.500601   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:54.874918   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:54.901272   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:54.998827   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:54.999146   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:55.374864   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:55.402006   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:55.498877   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:55.501821   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:55.873907   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:55.901446   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:55.999355   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:55.999731   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:56.374667   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:56.400841   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:56.498344   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:56.498843   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:56.873686   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:56.901219   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:57.000225   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:57.000684   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:57.406452   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:57.406464   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:57.581809   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:57.581811   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:57.873553   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:57.901897   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:57.999428   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:58.000647   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:58.375854   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:58.400890   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:58.499130   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:58.499366   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:58.874318   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:58.901171   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:58.998876   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:58.999273   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:59.374396   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:59.401990   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:59.498496   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:59.498866   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:22:59.873864   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:22:59.901177   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:22:59.999686   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:22:59.999863   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:00.375290   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:00.400741   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:00.500522   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:00.500540   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:00.874191   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:00.901875   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:00.998704   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:00.998915   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:01.374078   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:01.401750   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:01.506346   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:01.506527   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:01.876368   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:01.902422   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:01.999596   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:02.000086   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:02.374151   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:02.401472   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:02.498948   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:02.499534   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:02.875274   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:02.901557   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:03.000269   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:03.000771   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:03.374993   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:03.401214   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:03.499004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:03.500065   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:03.874452   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:03.902257   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:04.000052   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:04.000236   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:04.373591   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:04.401858   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:04.499439   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:04.500206   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:04.875175   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:04.900804   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:04.999897   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:05.000047   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:05.374486   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:05.401688   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:05.499393   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:05.500080   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:05.877004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:05.903549   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:05.999977   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:06.000110   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:06.374268   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:06.401613   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:06.499179   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:06.499565   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:06.875406   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:06.901511   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:06.998797   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:07.003046   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:07.374553   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:07.401523   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:07.499715   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:07.500159   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:07.874729   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:07.974038   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:08.074967   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:08.075671   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:08.373797   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:08.402092   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:08.498444   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:23:08.498864   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:08.875182   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:08.901422   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:08.999944   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:09.000033   11845 kapi.go:107] duration metric: took 36.005384954s to wait for kubernetes.io/minikube-addons=registry ...
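	(Each of these kapi.go waits polls pods by label selector until they report Ready; the registry wait that just completed can be approximated with kubectl wait, where the namespace and label come from the pod listing earlier in the log and the timeout is illustrative:

	    kubectl --context addons-966941 -n kube-system wait pod \
	      -l kubernetes.io/minikube-addons=registry \
	      --for=condition=Ready --timeout=5m
	)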
	I0816 12:23:09.374660   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:09.402094   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:09.499479   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:09.874921   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:09.901199   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:09.999261   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:10.374195   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:10.401485   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:10.499632   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:10.880422   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:10.901095   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:11.000170   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:11.374609   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:11.400958   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:11.498800   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:11.875773   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:11.901576   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:12.025297   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:12.379651   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:12.404776   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:12.500633   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:12.875115   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:12.901587   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:13.002562   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:13.375903   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:13.401277   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:13.499028   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:13.875059   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:13.901218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:14.000465   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:14.374907   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:14.401815   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:14.499800   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:14.873796   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:14.901670   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:15.000128   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:15.374272   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:15.401525   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:15.499405   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:15.912958   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:15.913975   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:15.999293   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:16.376802   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:16.474341   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:16.499099   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:16.873901   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:16.901914   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:16.999332   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:17.375635   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:17.402734   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:17.500349   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:17.874067   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:17.900941   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:18.000821   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:18.376430   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:18.401004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:18.502166   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:18.874665   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:18.900609   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:18.999582   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:19.374808   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:19.401204   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:19.498622   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:19.874936   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:19.900675   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:19.999824   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:20.374071   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:20.401596   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:20.499720   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:20.877583   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:20.901311   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:20.998963   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:21.375218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:21.400901   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:21.499725   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:21.874820   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:21.902058   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:21.999435   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:22.374582   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:22.724718   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:22.726063   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:22.877439   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:22.977942   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:23.002228   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:23.375176   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:23.400817   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:23.500420   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:23.874085   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:23.900945   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:24.000016   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:24.373927   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:24.402008   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:24.499806   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:24.875066   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:24.901018   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:24.998297   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:25.374686   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:25.401323   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:25.498956   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:25.873581   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:25.902769   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:26.225837   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:26.373952   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:26.401335   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:26.499236   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:26.874753   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:26.901658   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:26.998867   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:27.495357   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:27.496728   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:27.500287   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:27.874566   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:27.903747   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:28.001145   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:28.375469   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:28.410583   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:28.507272   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:28.876611   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:28.900471   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:28.999004   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:29.375014   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:29.400980   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:29.499253   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:29.878123   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:29.902334   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:29.998830   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:30.375185   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:30.401259   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:30.498734   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:30.874379   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:30.901585   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:30.999376   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:31.374060   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:31.402494   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:31.499240   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:31.873501   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:31.901964   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:32.001713   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:32.375004   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:32.475187   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:32.498490   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:32.874126   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:32.901660   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:32.999705   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:33.648062   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:33.648944   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:33.649048   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:33.879000   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:33.978503   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:33.998800   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:34.377154   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:34.401470   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:34.498933   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:34.874023   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:34.901202   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:34.998628   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:35.375722   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:35.475771   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:35.500319   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:35.874155   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:35.901147   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:35.998585   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:36.375292   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:36.402252   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:36.504605   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:36.874635   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:36.974912   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:37.075871   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:37.380259   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:37.476943   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:37.499312   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:37.884243   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:37.901536   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:37.999233   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:38.376164   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:38.401242   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:38.499014   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:38.874388   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:38.902109   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:39.007867   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:39.377261   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:39.401958   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:39.499842   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:39.874793   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:39.900837   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:40.001184   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:40.373598   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:40.401282   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:40.498665   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:40.874831   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:40.905107   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:40.999737   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:41.587811   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:41.588238   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:41.589218   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:41.875453   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:41.900826   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:41.999247   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:42.377133   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:42.400830   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:42.499335   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:42.873948   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:42.901613   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:42.999803   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:43.373968   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:43.401208   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:43.499080   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:43.873787   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:43.900721   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:44.000558   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:44.631266   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:44.631977   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:44.631992   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:44.875238   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:44.901193   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:44.999818   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:45.375253   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:45.401216   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:45.499005   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:45.875266   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:45.901645   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:46.000597   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:46.374564   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:46.401726   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:46.509223   11845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:23:46.874783   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:46.904394   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:47.015118   11845 kapi.go:107] duration metric: took 1m14.02046352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 12:23:47.375233   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:47.401241   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:47.874630   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:47.900714   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:48.374248   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:48.400953   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:48.875333   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:48.901247   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:49.376363   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:49.401878   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:49.875581   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:49.901367   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:50.375708   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:50.401994   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:51.238531   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:51.240766   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:51.378359   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:51.401247   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:23:51.874269   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:51.973249   11845 kapi.go:107] duration metric: took 1m16.575493995s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 12:23:51.974718   11845 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-966941 cluster.
	I0816 12:23:51.976011   11845 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 12:23:51.977221   11845 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 12:23:52.375308   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:52.873775   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:53.374625   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:53.874171   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:54.374181   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:54.874272   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:55.376777   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:55.874150   11845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:23:56.374685   11845 kapi.go:107] duration metric: took 1m22.505212527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 12:23:56.376692   11845 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, default-storageclass, cloud-spanner, storage-provisioner-rancher, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0816 12:23:56.377867   11845 addons.go:510] duration metric: took 1m32.908811507s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns default-storageclass cloud-spanner storage-provisioner-rancher metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0816 12:23:56.377916   11845 start.go:246] waiting for cluster config update ...
	I0816 12:23:56.377942   11845 start.go:255] writing updated cluster config ...
	I0816 12:23:56.378258   11845 ssh_runner.go:195] Run: rm -f paused
	I0816 12:23:56.428917   11845 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 12:23:56.430937   11845 out.go:177] * Done! kubectl is now configured to use "addons-966941" cluster and "default" namespace by default
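
Editor's note: the kapi.go:96 lines above show minikube polling each addon's pods by label selector roughly every half second until they leave Pending, with kapi.go:107 recording the total wait. Below is a minimal client-go sketch of that polling pattern; the namespace, selector, timeout, tick interval, and kubeconfig handling are illustrative assumptions, not minikube's actual kapi.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until at least one exists and
// all of them are Running, mirroring the "waiting for pod" lines in the log above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // the log above ticks roughly twice per second
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // the caller's overall timeout expired
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
}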
	
	
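Editor's note: the gcp-auth messages in the minikube output above say per-pod credential mounting can be skipped by adding a label with the `gcp-auth-skip-secret` key. A minimal client-go sketch of creating such a pod follows; only the label key comes from the log, while the pod name, namespace, image, label value, and kubeconfig handling are assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical name
			Namespace: "default",
			// Label key taken from the minikube message above; the value is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox", // any image works; this one appears elsewhere in the report
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(pod.Namespace).Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
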
	==> CRI-O <==
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.017761280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811402017739661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b977b01b-da72-43da-862d-836fa449e3e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.018232050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f4ce3e8-1594-4d51-ac64-e7c87b332db2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.018310410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f4ce3e8-1594-4d51-ac64-e7c87b332db2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.018595718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172381093283036
5759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723810932817029528,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f4ce3e8-1594-4d51-ac64-e7c87b332db2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.056992546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e32e8aa-029b-4f07-9f22-ba949674cb9e name=/runtime.v1.RuntimeService/Version
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.057091513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e32e8aa-029b-4f07-9f22-ba949674cb9e name=/runtime.v1.RuntimeService/Version
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.058216928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db29987f-9375-4913-b2bb-9c0d00ac161c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.059647929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811402059619930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db29987f-9375-4913-b2bb-9c0d00ac161c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.060216083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ff57a91-9e8f-42b9-9f7f-ce80fe819e91 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.060295333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ff57a91-9e8f-42b9-9f7f-ce80fe819e91 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.060634191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172381093283036
5759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723810932817029528,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ff57a91-9e8f-42b9-9f7f-ce80fe819e91 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.099650360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08c3655c-8ef6-423c-ad4c-2d069d33eb33 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.099742559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08c3655c-8ef6-423c-ad4c-2d069d33eb33 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.100765655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92d4a45e-2be5-4f23-b4de-25541974fbeb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.101964092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811402101937602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92d4a45e-2be5-4f23-b4de-25541974fbeb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.102731554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07a949aa-2167-4864-b405-a4f5ed40b19a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.102798770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07a949aa-2167-4864-b405-a4f5ed40b19a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.103084646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172381093283036
5759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723810932817029528,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07a949aa-2167-4864-b405-a4f5ed40b19a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.141687812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81cd6810-3086-4c1d-84b0-5ba658379458 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.141761677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81cd6810-3086-4c1d-84b0-5ba658379458 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.143635848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3f1e1d0-e465-4a81-9934-62c42bc48cba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.144850754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811402144821999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3f1e1d0-e465-4a81-9934-62c42bc48cba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.145476795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=634f59a5-93d5-4e8d-a8a6-89ea9ad2f545 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.145548453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=634f59a5-93d5-4e8d-a8a6-89ea9ad2f545 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:30:02 addons-966941 crio[681]: time="2024-08-16 12:30:02.145833924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7b774160af91adad43f12404c85d0837a2ba7fcf45a4cbcd1cd37044ffceaa9,PodSandboxId:a780f4468d501c5d9431e632fb120fb1e2e901a794e46acc703beceac17f385d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723811253756890448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-xgd2h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f258e324-71ea-4930-9f6b-bbfed2eb5b61,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2aea549461411c5baa256e80f79c058abfd14fb90bb929a522c674554a1a3b,PodSandboxId:a620cfcad2fa513f2d5b0b4c2693e4e3b1813e1fa85b6bec6977a2e3fbff77f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723811114047008258,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 293f8398-f883-4566-aa48-f7d867211e99,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844855539e3e8ee7266bb520f0657f04d1401f30d8900c6b0cab2b33d3c97ea7,PodSandboxId:6a4d6b515995273427cc7b9a80957490f785ed505674a81cbf4ace8c48e1af97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723811039919934465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 809c2f02-508e-450d-8
88c-83832697c981,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8,PodSandboxId:f4ed0fa81aa0fcad219c6e65931663b3e3f8b654d17fefa34f606f99ecf2e622,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723810992986918932,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-p6z8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 32196dc2-ada2-4e60-b64c-573967f34e54,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a,PodSandboxId:e438f1d3d5dcb8af5c97f50deac825f20310b46bd988473c52cf2fe270f51ebe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723810949899739168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4bc2aa-70f7-48ee-b9f1-46102ba63337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a,PodSandboxId:ba5c40550c3689f9ac8933f8dcb3d3a723b1b355b3282a41dd0af9e5dc7beeca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723810946547512969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-jmsfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a04aa-8d7e-4811-b4b3-dbc0c1bebbb7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247,PodSandboxId:e13b5c028dc5173dc8aef6705c70877617dd130d5eb43dbb2570fc3d90ab912b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723810943771683793,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qnd5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d7c8f55-8a0f-4598-a0fd-2f7116e8af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853,PodSandboxId:09a7b3e67f6b4ed73ebcb3fe0371e13247e855fa826c2735ad75d9b125fe9a78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723810932834701643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7f0f2e511e3ee492e03bbea1a692cb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1,PodSandboxId:488317774d000c5795c0edbf4ca3205c524369679846a5c2b8de576ebbb68850,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723810932774115052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940807d2a7fa779893a3e1bd18518954,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf,PodSandboxId:145971dbfd079a6c99cc4057db517e09f461766963676daf20568687a8c4357e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172381093283036
5759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37003e0366a0904b6a2e41d3bd1df29,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3,PodSandboxId:871d3079795f60786b563db947d7f0387d194d6dd3d4fd690287875745fb3c00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723810932817029528,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-966941,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b02dca01b057b45613a9a29a35a25c5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=634f59a5-93d5-4e8d-a8a6-89ea9ad2f545 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7b774160af91       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   a780f4468d501       hello-world-app-55bf9c44b4-xgd2h
	9a2aea5494614       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   a620cfcad2fa5       nginx
	844855539e3e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   6a4d6b5159952       busybox
	3d2540ee00152       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   f4ed0fa81aa0f       metrics-server-8988944d9-p6z8v
	c6219df4f6a9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   e438f1d3d5dcb       storage-provisioner
	9bae8bfdc21a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   ba5c40550c368       coredns-6f6b679f8f-jmsfb
	826db47751f80       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   e13b5c028dc51       kube-proxy-qnd5q
	087e5b6007d48       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   09a7b3e67f6b4       kube-scheduler-addons-966941
	c6a417011125a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   145971dbfd079       kube-apiserver-addons-966941
	ea1e709000b7f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   871d3079795f6       kube-controller-manager-addons-966941
	7bb33a64c7a6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   488317774d000       etcd-addons-966941
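
	For reference, a similar CRI-level listing can usually be reproduced directly on the node. A minimal sketch, assuming crictl is available inside the VM and cri-o is listening on the socket shown in the node annotations:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a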
	
	
	==> coredns [9bae8bfdc21a399cfdf25528506b00489071eec863d2519c65a9d6fa7a4c667a] <==
	[INFO] 10.244.0.7:36155 - 48735 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000296073s
	[INFO] 10.244.0.7:35810 - 31838 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085852s
	[INFO] 10.244.0.7:35810 - 46684 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093016s
	[INFO] 10.244.0.7:58626 - 41299 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006217s
	[INFO] 10.244.0.7:58626 - 51285 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102842s
	[INFO] 10.244.0.7:45240 - 11379 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010382s
	[INFO] 10.244.0.7:45240 - 39029 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178655s
	[INFO] 10.244.0.7:37080 - 4549 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121924s
	[INFO] 10.244.0.7:37080 - 40920 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172492s
	[INFO] 10.244.0.7:39588 - 57604 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109906s
	[INFO] 10.244.0.7:39588 - 40450 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031395s
	[INFO] 10.244.0.7:42235 - 63003 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086268s
	[INFO] 10.244.0.7:42235 - 52761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090073s
	[INFO] 10.244.0.7:47747 - 57743 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104217s
	[INFO] 10.244.0.7:47747 - 9612 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090525s
	[INFO] 10.244.0.22:57347 - 46561 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357197s
	[INFO] 10.244.0.22:40248 - 22516 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000114261s
	[INFO] 10.244.0.22:51106 - 49073 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012772s
	[INFO] 10.244.0.22:34649 - 57528 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000053051s
	[INFO] 10.244.0.22:49390 - 51786 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075473s
	[INFO] 10.244.0.22:45355 - 40275 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043733s
	[INFO] 10.244.0.22:58957 - 56775 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000778625s
	[INFO] 10.244.0.22:58237 - 3316 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00109432s
	[INFO] 10.244.0.26:40291 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000285169s
	[INFO] 10.244.0.26:47593 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149944s
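
	These CoreDNS query logs come from the coredns-6f6b679f8f-jmsfb pod listed above; a sketch for pulling them again, assuming the kubectl context name matches the minikube profile:
	
	  kubectl --context addons-966941 -n kube-system logs coredns-6f6b679f8f-jmsfb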
	
	
	==> describe nodes <==
	Name:               addons-966941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-966941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=addons-966941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_22_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-966941
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:22:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-966941
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:29:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:27:54 +0000   Fri, 16 Aug 2024 12:22:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:27:54 +0000   Fri, 16 Aug 2024 12:22:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:27:54 +0000   Fri, 16 Aug 2024 12:22:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:27:54 +0000   Fri, 16 Aug 2024 12:22:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-966941
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb11edb67432498084ba0979e0b9a2a0
	  System UUID:                fb11edb6-7432-4980-84ba-0979e0b9a2a0
	  Boot ID:                    99dd81b0-f07e-42c1-807f-c6307b945b9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     hello-world-app-55bf9c44b4-xgd2h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 coredns-6f6b679f8f-jmsfb                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m39s
	  kube-system                 etcd-addons-966941                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m44s
	  kube-system                 kube-apiserver-addons-966941             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-controller-manager-addons-966941    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-proxy-qnd5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-scheduler-addons-966941             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 metrics-server-8988944d9-p6z8v           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m34s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m50s (x8 over 7m50s)  kubelet          Node addons-966941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m50s (x8 over 7m50s)  kubelet          Node addons-966941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m50s (x7 over 7m50s)  kubelet          Node addons-966941 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m45s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m44s                  kubelet          Node addons-966941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m44s                  kubelet          Node addons-966941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m44s                  kubelet          Node addons-966941 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m43s                  kubelet          Node addons-966941 status is now: NodeReady
	  Normal  RegisteredNode           7m40s                  node-controller  Node addons-966941 event: Registered Node addons-966941 in Controller
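
	The node description above is kubectl's view of the single control-plane node; a sketch for re-querying it against the same cluster, assuming the context name matches the profile:
	
	  kubectl --context addons-966941 describe node addons-966941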
	
	
	==> dmesg <==
	[Aug16 12:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.093804] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.826264] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.451320] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.115925] kauditd_printk_skb: 77 callbacks suppressed
	[  +7.615968] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.333674] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.009297] kauditd_printk_skb: 20 callbacks suppressed
	[Aug16 12:24] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.348175] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.905413] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.965412] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.953442] kauditd_printk_skb: 37 callbacks suppressed
	[  +8.399765] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.425735] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.345615] kauditd_printk_skb: 7 callbacks suppressed
	[Aug16 12:25] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.490148] kauditd_printk_skb: 45 callbacks suppressed
	[  +8.348775] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.286670] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.082959] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.848424] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.375819] kauditd_printk_skb: 33 callbacks suppressed
	[Aug16 12:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.246086] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [7bb33a64c7a6efa428cde2b0584281471cfad35b6f88d3f978f389ad8d11bcd1] <==
	{"level":"warn","ts":"2024-08-16T12:23:44.614515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.638969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:44.615703Z","caller":"traceutil/trace.go:171","msg":"trace[1742710230] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"129.725833ms","start":"2024-08-16T12:23:44.485872Z","end":"2024-08-16T12:23:44.615598Z","steps":["trace[1742710230] 'agreement among raft nodes before linearized reading'  (duration: 128.631135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.209538Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16204392754877957419,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-16T12:23:51.221748Z","caller":"traceutil/trace.go:171","msg":"trace[2039939734] linearizableReadLoop","detail":"{readStateIndex:1202; appliedIndex:1201; }","duration":"512.57673ms","start":"2024-08-16T12:23:50.709157Z","end":"2024-08-16T12:23:51.221733Z","steps":["trace[2039939734] 'read index received'  (duration: 512.339957ms)","trace[2039939734] 'applied index is now lower than readState.Index'  (duration: 236.33µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T12:23:51.221945Z","caller":"traceutil/trace.go:171","msg":"trace[529895148] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"565.556788ms","start":"2024-08-16T12:23:50.656341Z","end":"2024-08-16T12:23:51.221898Z","steps":["trace[529895148] 'process raft request'  (duration: 565.284352ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.358985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222096Z","caller":"traceutil/trace.go:171","msg":"trace[136183815] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"361.425408ms","start":"2024-08-16T12:23:50.860662Z","end":"2024-08-16T12:23:51.222087Z","steps":["trace[136183815] 'agreement among raft nodes before linearized reading'  (duration: 361.341256ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:23:50.860609Z","time spent":"361.506145ms","remote":"127.0.0.1:57548","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-16T12:23:51.222173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.432867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222209Z","caller":"traceutil/trace.go:171","msg":"trace[114815219] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1169; }","duration":"276.501007ms","start":"2024-08-16T12:23:50.945702Z","end":"2024-08-16T12:23:51.222203Z","steps":["trace[114815219] 'agreement among raft nodes before linearized reading'  (duration: 276.369503ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.649612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222306Z","caller":"traceutil/trace.go:171","msg":"trace[1057252280] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"333.684634ms","start":"2024-08-16T12:23:50.888615Z","end":"2024-08-16T12:23:51.222300Z","steps":["trace[1057252280] 'agreement among raft nodes before linearized reading'  (duration: 333.639927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:23:51.222323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:23:50.888582Z","time spent":"333.735805ms","remote":"127.0.0.1:57548","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-16T12:23:51.222049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:23:50.656320Z","time spent":"565.675315ms","remote":"127.0.0.1:57536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1163 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-16T12:23:51.222574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.41428ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:23:51.222668Z","caller":"traceutil/trace.go:171","msg":"trace[1494340823] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1169; }","duration":"513.510286ms","start":"2024-08-16T12:23:50.709151Z","end":"2024-08-16T12:23:51.222661Z","steps":["trace[1494340823] 'agreement among raft nodes before linearized reading'  (duration: 513.402588ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:24:11.849192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.022431ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:24:11.849473Z","caller":"traceutil/trace.go:171","msg":"trace[1305355956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1293; }","duration":"140.289651ms","start":"2024-08-16T12:24:11.709117Z","end":"2024-08-16T12:24:11.849407Z","steps":["trace[1305355956] 'range keys from in-memory index tree'  (duration: 140.010663ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:25:08.757797Z","caller":"traceutil/trace.go:171","msg":"trace[1556507125] transaction","detail":"{read_only:false; response_revision:1630; number_of_response:1; }","duration":"342.552941ms","start":"2024-08-16T12:25:08.415211Z","end":"2024-08-16T12:25:08.757764Z","steps":["trace[1556507125] 'process raft request'  (duration: 342.199817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:25:08.758074Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:25:08.415194Z","time spent":"342.732347ms","remote":"127.0.0.1:36570","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1603 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-08-16T12:25:21.796884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.997808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:25:21.796949Z","caller":"traceutil/trace.go:171","msg":"trace[1246856543] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1741; }","duration":"209.073243ms","start":"2024-08-16T12:25:21.587866Z","end":"2024-08-16T12:25:21.796939Z","steps":["trace[1246856543] 'range keys from in-memory index tree'  (duration: 208.900576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:25:21.797111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.440704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-08-16T12:25:21.797128Z","caller":"traceutil/trace.go:171","msg":"trace[1889004501] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1741; }","duration":"137.459665ms","start":"2024-08-16T12:25:21.659663Z","end":"2024-08-16T12:25:21.797122Z","steps":["trace[1889004501] 'range keys from in-memory index tree'  (duration: 137.348197ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:25:23.899175Z","caller":"traceutil/trace.go:171","msg":"trace[755449834] transaction","detail":"{read_only:false; response_revision:1745; number_of_response:1; }","duration":"110.62388ms","start":"2024-08-16T12:25:23.788535Z","end":"2024-08-16T12:25:23.899159Z","steps":["trace[755449834] 'process raft request'  (duration: 109.976462ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:30:02 up 8 min,  0 users,  load average: 0.11, 0.72, 0.54
	Linux addons-966941 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c6a417011125a0257acefc3dc36994fa81803b6e0adcda7539a8151c8c779ebf] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 12:24:19.290043       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 12:24:19.292846       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0816 12:24:49.808317       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0816 12:24:54.814828       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.129:8443->10.244.0.28:49778: read: connection reset by peer
	I0816 12:25:03.844138       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 12:25:04.901306       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 12:25:09.527850       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 12:25:09.713962       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.85.112"}
	I0816 12:25:16.451689       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 12:25:18.460162       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.69.59"}
	I0816 12:25:51.054165       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.054222       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:25:51.082374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.082476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:25:51.190128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.190230       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:25:51.200774       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:25:51.200822       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 12:25:52.191107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 12:25:52.201843       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0816 12:25:52.333816       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0816 12:27:30.881595       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.242.210"}
	
	
	==> kube-controller-manager [ea1e709000b7f625b1f43aeb5c4527ba0a2bbfc2704787b2b5e3cd7641d29fb3] <==
	W0816 12:27:54.854478       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:27:54.854532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:28:02.620773       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:28:02.620883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:28:25.844980       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:28:25.845046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:28:41.949391       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:28:41.949612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:28:43.594667       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:28:43.594719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:28:50.152518       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:28:50.152574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:29:23.121880       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:29:23.122015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:29:28.418856       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:29:28.418991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:29:33.913148       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:29:33.913200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:29:41.151484       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:29:41.151547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:30:01.001219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:30:01.001266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:30:01.127758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="11.085µs"
	W0816 12:30:02.328892       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:30:02.328946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [826db47751f803b9411a806994ccc674fdc9ef490dad62f4a9dea23670d53247] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:22:24.474371       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:22:24.484976       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	E0816 12:22:24.485053       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:22:24.623813       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:22:24.623845       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:22:24.623872       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:22:24.630017       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:22:24.630244       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:22:24.630255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:22:24.636227       1 config.go:197] "Starting service config controller"
	I0816 12:22:24.636253       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:22:24.636275       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:22:24.636279       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:22:24.636857       1 config.go:326] "Starting node config controller"
	I0816 12:22:24.636874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:22:24.736382       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:22:24.736480       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:22:24.737127       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [087e5b6007d48af7bd58a02dfea863fe58e858b3fdbaac1c9265aeb756141853] <==
	W0816 12:22:15.506679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 12:22:15.506709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:15.507222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 12:22:15.507264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:15.507874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 12:22:15.507920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.394177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 12:22:16.394232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.406752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 12:22:16.406840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.491918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 12:22:16.491967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.614672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 12:22:16.614720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.635236       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 12:22:16.635290       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 12:22:16.645900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:22:16.646045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.682324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 12:22:16.682461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.698490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 12:22:16.699097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:22:16.698926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:22:16.699381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 12:22:18.695410       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 12:29:17 addons-966941 kubelet[1224]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:29:17 addons-966941 kubelet[1224]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:29:17 addons-966941 kubelet[1224]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:29:18 addons-966941 kubelet[1224]: E0816 12:29:18.186292    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811358185555014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:18 addons-966941 kubelet[1224]: E0816 12:29:18.186549    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811358185555014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:28 addons-966941 kubelet[1224]: E0816 12:29:28.188966    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811368188653636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:28 addons-966941 kubelet[1224]: E0816 12:29:28.189060    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811368188653636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:38 addons-966941 kubelet[1224]: E0816 12:29:38.192510    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811378191691476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:38 addons-966941 kubelet[1224]: E0816 12:29:38.192624    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811378191691476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:48 addons-966941 kubelet[1224]: E0816 12:29:48.195081    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811388194596814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:48 addons-966941 kubelet[1224]: E0816 12:29:48.195349    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811388194596814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:58 addons-966941 kubelet[1224]: E0816 12:29:58.197820    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811398197198970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:58 addons-966941 kubelet[1224]: E0816 12:29:58.197914    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811398197198970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590828,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:29:59 addons-966941 kubelet[1224]: I0816 12:29:59.960665    1224 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 12:30:01 addons-966941 kubelet[1224]: I0816 12:30:01.150829    1224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-xgd2h" podStartSLOduration=148.661391504 podStartE2EDuration="2m31.150798368s" podCreationTimestamp="2024-08-16 12:27:30 +0000 UTC" firstStartedPulling="2024-08-16 12:27:31.254258092 +0000 UTC m=+313.471338473" lastFinishedPulling="2024-08-16 12:27:33.743664955 +0000 UTC m=+315.960745337" observedRunningTime="2024-08-16 12:27:34.137833575 +0000 UTC m=+316.354913962" watchObservedRunningTime="2024-08-16 12:30:01.150798368 +0000 UTC m=+463.367878758"
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.532795    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7k6r\" (UniqueName: \"kubernetes.io/projected/32196dc2-ada2-4e60-b64c-573967f34e54-kube-api-access-w7k6r\") pod \"32196dc2-ada2-4e60-b64c-573967f34e54\" (UID: \"32196dc2-ada2-4e60-b64c-573967f34e54\") "
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.532879    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/32196dc2-ada2-4e60-b64c-573967f34e54-tmp-dir\") pod \"32196dc2-ada2-4e60-b64c-573967f34e54\" (UID: \"32196dc2-ada2-4e60-b64c-573967f34e54\") "
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.533260    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32196dc2-ada2-4e60-b64c-573967f34e54-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "32196dc2-ada2-4e60-b64c-573967f34e54" (UID: "32196dc2-ada2-4e60-b64c-573967f34e54"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.543661    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32196dc2-ada2-4e60-b64c-573967f34e54-kube-api-access-w7k6r" (OuterVolumeSpecName: "kube-api-access-w7k6r") pod "32196dc2-ada2-4e60-b64c-573967f34e54" (UID: "32196dc2-ada2-4e60-b64c-573967f34e54"). InnerVolumeSpecName "kube-api-access-w7k6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.634135    1224 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w7k6r\" (UniqueName: \"kubernetes.io/projected/32196dc2-ada2-4e60-b64c-573967f34e54-kube-api-access-w7k6r\") on node \"addons-966941\" DevicePath \"\""
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.634163    1224 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/32196dc2-ada2-4e60-b64c-573967f34e54-tmp-dir\") on node \"addons-966941\" DevicePath \"\""
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.721007    1224 scope.go:117] "RemoveContainer" containerID="3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8"
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.769314    1224 scope.go:117] "RemoveContainer" containerID="3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8"
	Aug 16 12:30:02 addons-966941 kubelet[1224]: E0816 12:30:02.770176    1224 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8\": container with ID starting with 3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8 not found: ID does not exist" containerID="3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8"
	Aug 16 12:30:02 addons-966941 kubelet[1224]: I0816 12:30:02.770212    1224 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8"} err="failed to get container status \"3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8\": rpc error: code = NotFound desc = could not find container \"3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8\": container with ID starting with 3d2540ee0015260f79d7216e22bd020d23ba3f5b476926295586832ab614c8b8 not found: ID does not exist"
	
	
	==> storage-provisioner [c6219df4f6a9ab0f6b5579c6136b386e7097254a60f6cef5b1162ea5650ebd0a] <==
	I0816 12:22:30.562900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 12:22:30.608636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 12:22:30.608684       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 12:22:30.748783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 12:22:30.748928       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-966941_3269c7d6-db75-4b54-a4b8-88c25905904a!
	I0816 12:22:30.750148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a001f778-fcb5-42eb-a580-0d5d7ade1b5b", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-966941_3269c7d6-db75-4b54-a4b8-88c25905904a became leader
	I0816 12:22:31.010791       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-966941_3269c7d6-db75-4b54-a4b8-88c25905904a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-966941 -n addons-966941
helpers_test.go:261: (dbg) Run:  kubectl --context addons-966941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (323.96s)
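Note: the kube-apiserver log above shows the v1beta1.metrics.k8s.io APIService repeatedly failing with a 503 (service unavailable), which is consistent with the metrics-server backend never becoming ready before the test timed out. A possible manual follow-up for this failure mode (not part of the test run; the profile and deployment names are taken from the logs above, the commands are standard kubectl):
	kubectl --context addons-966941 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-966941 -n kube-system get deploy metrics-server
	kubectl --context addons-966941 -n kube-system logs deploy/metrics-server --tail=100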

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.36s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-966941
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-966941: exit status 82 (2m0.462683861s)

                                                
                                                
-- stdout --
	* Stopping node "addons-966941"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-966941" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-966941
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-966941: exit status 11 (21.609834282s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-966941" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-966941
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-966941: exit status 11 (6.144818375s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-966941" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-966941
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-966941: exit status 11 (6.144544088s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-966941" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.36s)
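Note: the stop timed out with the guest still reported as "Running" (GUEST_STOP_TIMEOUT, exit 82), and the later addon commands failed to dial SSH on 192.168.39.129:22 with "no route to host", suggesting the VM was left in an intermediate state. A possible manual check for this failure mode (not executed in this run; with the kvm2 driver the libvirt domain name normally matches the minikube profile, which is assumed here):
	virsh list --all
	virsh dominfo addons-966941
	out/minikube-linux-amd64 logs -p addons-966941 --file=logs.txt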

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 node stop m02 -v=7 --alsologtostderr
E0816 12:42:02.858514   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:43:24.780432   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.460257729s)

                                                
                                                
-- stdout --
	* Stopping node "ha-863936-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:41:55.671342   26301 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:41:55.671612   26301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:41:55.671623   26301 out.go:358] Setting ErrFile to fd 2...
	I0816 12:41:55.671629   26301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:41:55.671796   26301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:41:55.672043   26301 mustload.go:65] Loading cluster: ha-863936
	I0816 12:41:55.672406   26301 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:41:55.672422   26301 stop.go:39] StopHost: ha-863936-m02
	I0816 12:41:55.672774   26301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:41:55.672824   26301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:41:55.688320   26301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33609
	I0816 12:41:55.688768   26301 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:41:55.689306   26301 main.go:141] libmachine: Using API Version  1
	I0816 12:41:55.689331   26301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:41:55.689680   26301 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:41:55.692100   26301 out.go:177] * Stopping node "ha-863936-m02"  ...
	I0816 12:41:55.693431   26301 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 12:41:55.693464   26301 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:41:55.693671   26301 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 12:41:55.693696   26301 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:41:55.696284   26301 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:41:55.696711   26301 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:41:55.696737   26301 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:41:55.696893   26301 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:41:55.697094   26301 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:41:55.697249   26301 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:41:55.697403   26301 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:41:55.779953   26301 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 12:41:55.835994   26301 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 12:41:55.894508   26301 main.go:141] libmachine: Stopping "ha-863936-m02"...
	I0816 12:41:55.894543   26301 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:41:55.896266   26301 main.go:141] libmachine: (ha-863936-m02) Calling .Stop
	I0816 12:41:55.900208   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 0/120
	I0816 12:41:56.902591   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 1/120
	I0816 12:41:57.903888   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 2/120
	I0816 12:41:58.905161   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 3/120
	I0816 12:41:59.907479   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 4/120
	I0816 12:42:00.909571   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 5/120
	I0816 12:42:01.911252   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 6/120
	I0816 12:42:02.912593   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 7/120
	I0816 12:42:03.913746   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 8/120
	I0816 12:42:04.915058   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 9/120
	I0816 12:42:05.917109   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 10/120
	I0816 12:42:06.919273   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 11/120
	I0816 12:42:07.920656   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 12/120
	I0816 12:42:08.922400   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 13/120
	I0816 12:42:09.923688   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 14/120
	I0816 12:42:10.925465   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 15/120
	I0816 12:42:11.927393   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 16/120
	I0816 12:42:12.929560   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 17/120
	I0816 12:42:13.931252   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 18/120
	I0816 12:42:14.932480   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 19/120
	I0816 12:42:15.934148   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 20/120
	I0816 12:42:16.936936   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 21/120
	I0816 12:42:17.938346   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 22/120
	I0816 12:42:18.939725   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 23/120
	I0816 12:42:19.941940   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 24/120
	I0816 12:42:20.943169   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 25/120
	I0816 12:42:21.944454   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 26/120
	I0816 12:42:22.945744   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 27/120
	I0816 12:42:23.948007   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 28/120
	I0816 12:42:24.949470   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 29/120
	I0816 12:42:25.951418   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 30/120
	I0816 12:42:26.952949   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 31/120
	I0816 12:42:27.954988   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 32/120
	I0816 12:42:28.956190   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 33/120
	I0816 12:42:29.957412   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 34/120
	I0816 12:42:30.959364   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 35/120
	I0816 12:42:31.960665   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 36/120
	I0816 12:42:32.962471   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 37/120
	I0816 12:42:33.963713   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 38/120
	I0816 12:42:34.965182   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 39/120
	I0816 12:42:35.967186   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 40/120
	I0816 12:42:36.969340   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 41/120
	I0816 12:42:37.970625   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 42/120
	I0816 12:42:38.971896   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 43/120
	I0816 12:42:39.973125   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 44/120
	I0816 12:42:40.974820   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 45/120
	I0816 12:42:41.976020   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 46/120
	I0816 12:42:42.977311   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 47/120
	I0816 12:42:43.978692   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 48/120
	I0816 12:42:44.979925   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 49/120
	I0816 12:42:45.981914   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 50/120
	I0816 12:42:46.983289   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 51/120
	I0816 12:42:47.985135   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 52/120
	I0816 12:42:48.986385   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 53/120
	I0816 12:42:49.988101   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 54/120
	I0816 12:42:50.990062   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 55/120
	I0816 12:42:51.991360   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 56/120
	I0816 12:42:52.992899   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 57/120
	I0816 12:42:53.994206   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 58/120
	I0816 12:42:54.995648   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 59/120
	I0816 12:42:55.997401   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 60/120
	I0816 12:42:56.998797   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 61/120
	I0816 12:42:58.000246   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 62/120
	I0816 12:42:59.001671   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 63/120
	I0816 12:43:00.003017   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 64/120
	I0816 12:43:01.004803   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 65/120
	I0816 12:43:02.006279   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 66/120
	I0816 12:43:03.007574   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 67/120
	I0816 12:43:04.009266   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 68/120
	I0816 12:43:05.010618   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 69/120
	I0816 12:43:06.012394   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 70/120
	I0816 12:43:07.013637   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 71/120
	I0816 12:43:08.015324   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 72/120
	I0816 12:43:09.016798   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 73/120
	I0816 12:43:10.018967   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 74/120
	I0816 12:43:11.020806   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 75/120
	I0816 12:43:12.022182   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 76/120
	I0816 12:43:13.023303   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 77/120
	I0816 12:43:14.024814   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 78/120
	I0816 12:43:15.025983   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 79/120
	I0816 12:43:16.028010   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 80/120
	I0816 12:43:17.029450   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 81/120
	I0816 12:43:18.030733   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 82/120
	I0816 12:43:19.032062   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 83/120
	I0816 12:43:20.034128   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 84/120
	I0816 12:43:21.036085   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 85/120
	I0816 12:43:22.037541   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 86/120
	I0816 12:43:23.039473   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 87/120
	I0816 12:43:24.041352   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 88/120
	I0816 12:43:25.043552   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 89/120
	I0816 12:43:26.045212   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 90/120
	I0816 12:43:27.047316   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 91/120
	I0816 12:43:28.048529   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 92/120
	I0816 12:43:29.049690   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 93/120
	I0816 12:43:30.051293   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 94/120
	I0816 12:43:31.053098   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 95/120
	I0816 12:43:32.054427   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 96/120
	I0816 12:43:33.055826   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 97/120
	I0816 12:43:34.057069   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 98/120
	I0816 12:43:35.058243   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 99/120
	I0816 12:43:36.060300   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 100/120
	I0816 12:43:37.061573   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 101/120
	I0816 12:43:38.063287   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 102/120
	I0816 12:43:39.064660   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 103/120
	I0816 12:43:40.065912   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 104/120
	I0816 12:43:41.067797   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 105/120
	I0816 12:43:42.068860   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 106/120
	I0816 12:43:43.070411   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 107/120
	I0816 12:43:44.071754   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 108/120
	I0816 12:43:45.073012   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 109/120
	I0816 12:43:46.075030   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 110/120
	I0816 12:43:47.076432   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 111/120
	I0816 12:43:48.078017   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 112/120
	I0816 12:43:49.079268   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 113/120
	I0816 12:43:50.080539   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 114/120
	I0816 12:43:51.082101   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 115/120
	I0816 12:43:52.083330   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 116/120
	I0816 12:43:53.085421   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 117/120
	I0816 12:43:54.087331   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 118/120
	I0816 12:43:55.088615   26301 main.go:141] libmachine: (ha-863936-m02) Waiting for machine to stop 119/120
	I0816 12:43:56.090109   26301 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 12:43:56.090352   26301 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-863936 node stop m02 -v=7 --alsologtostderr": exit status 30
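The stderr above shows the driver polling the VM once per second for 120 attempts ("Waiting for machine to stop N/120") and then giving up while the guest still reports "Running". Below is a minimal sketch of that stop-with-timeout loop; the getState callback and the simulated driver in main are illustrative stand-ins, not the actual libmachine driver API.

	package main

	import (
		"fmt"
		"time"
	)

	// pollStop checks the machine state once per second, up to maxAttempts
	// times, and returns an error if the machine never reaches "Stopped" --
	// mirroring the "Waiting for machine to stop N/120" loop in the log above.
	func pollStop(getState func() (string, error), maxAttempts int) error {
		lastState := "Unknown"
		for i := 0; i < maxAttempts; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			lastState = state
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", lastState)
	}

	func main() {
		// Simulated driver that never stops, reproducing the failure mode seen here.
		err := pollStop(func() (string, error) { return "Running", nil }, 3)
		fmt.Println("stop err:", err)
	}
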
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
E0816 12:43:56.823247   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (19.040726183s)

                                                
                                                
-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:43:56.136101   26754 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:43:56.136231   26754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:43:56.136242   26754 out.go:358] Setting ErrFile to fd 2...
	I0816 12:43:56.136248   26754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:43:56.136456   26754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:43:56.136742   26754 out.go:352] Setting JSON to false
	I0816 12:43:56.136773   26754 mustload.go:65] Loading cluster: ha-863936
	I0816 12:43:56.136812   26754 notify.go:220] Checking for updates...
	I0816 12:43:56.137230   26754 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:43:56.137250   26754 status.go:255] checking status of ha-863936 ...
	I0816 12:43:56.137669   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:43:56.137730   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:43:56.154131   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35665
	I0816 12:43:56.154591   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:43:56.155076   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:43:56.155102   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:43:56.155477   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:43:56.155688   26754 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:43:56.157225   26754 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:43:56.157242   26754 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:43:56.157542   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:43:56.157579   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:43:56.172469   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I0816 12:43:56.172838   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:43:56.173337   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:43:56.173360   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:43:56.173680   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:43:56.173836   26754 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:43:56.176621   26754 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:43:56.177063   26754 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:43:56.177088   26754 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:43:56.177234   26754 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:43:56.177557   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:43:56.177595   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:43:56.191984   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0816 12:43:56.192379   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:43:56.192841   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:43:56.192862   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:43:56.193173   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:43:56.193385   26754 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:43:56.193548   26754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:43:56.193577   26754 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:43:56.195934   26754 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:43:56.196254   26754 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:43:56.196278   26754 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:43:56.196421   26754 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:43:56.196587   26754 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:43:56.196731   26754 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:43:56.196845   26754 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:43:56.278186   26754 ssh_runner.go:195] Run: systemctl --version
	I0816 12:43:56.284889   26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:43:56.302352   26754 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:43:56.302391   26754 api_server.go:166] Checking apiserver status ...
	I0816 12:43:56.302433   26754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:43:56.317554   26754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:43:56.328363   26754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:43:56.328454   26754 ssh_runner.go:195] Run: ls
	I0816 12:43:56.333190   26754 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:43:56.339539   26754 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:43:56.339562   26754 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:43:56.339571   26754 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:43:56.339587   26754 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:43:56.339868   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:43:56.339897   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:43:56.354982   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0816 12:43:56.355412   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:43:56.355855   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:43:56.355875   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:43:56.356215   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:43:56.356387   26754 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:43:56.357898   26754 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:43:56.357915   26754 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:43:56.358201   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:43:56.358235   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:43:56.372642   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I0816 12:43:56.373029   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:43:56.373442   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:43:56.373466   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:43:56.373750   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:43:56.373927   26754 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:43:56.376709   26754 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:43:56.377183   26754 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:43:56.377207   26754 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:43:56.377323   26754 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:43:56.377760   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:43:56.377807   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:43:56.392762   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0816 12:43:56.393297   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:43:56.393738   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:43:56.393764   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:43:56.394099   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:43:56.394251   26754 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:43:56.394415   26754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:43:56.394436   26754 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:43:56.397121   26754 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:43:56.397680   26754 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:43:56.397695   26754 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:43:56.397914   26754 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:43:56.398079   26754 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:43:56.398362   26754 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:43:56.398548   26754 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:44:14.773110   26754 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:14.773219   26754 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:44:14.773233   26754 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:14.773244   26754 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:44:14.773261   26754 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:14.773268   26754 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:44:14.773578   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:14.773624   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:14.788099   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35419
	I0816 12:44:14.788570   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:14.789031   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:44:14.789050   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:14.789354   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:14.789557   26754 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:44:14.790962   26754 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:44:14.790977   26754 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:14.791266   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:14.791300   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:14.805586   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45953
	I0816 12:44:14.806001   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:14.806456   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:44:14.806481   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:14.806734   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:14.806891   26754 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:44:14.809126   26754 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:14.809472   26754 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:14.809496   26754 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:14.809593   26754 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:14.810009   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:14.810070   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:14.824365   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
	I0816 12:44:14.824770   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:14.825287   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:44:14.825305   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:14.825585   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:14.825776   26754 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:44:14.825968   26754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:14.825989   26754 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:44:14.828327   26754 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:14.828714   26754 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:14.828738   26754 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:14.828869   26754 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:44:14.829044   26754 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:44:14.829181   26754 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:44:14.829304   26754 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:44:14.915145   26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:14.933264   26754 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:14.933289   26754 api_server.go:166] Checking apiserver status ...
	I0816 12:44:14.933344   26754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:14.949843   26754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:44:14.960919   26754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:14.960978   26754 ssh_runner.go:195] Run: ls
	I0816 12:44:14.965812   26754 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:14.970209   26754 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:14.970230   26754 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:44:14.970238   26754 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:14.970253   26754 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:44:14.970530   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:14.970559   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:14.985590   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
	I0816 12:44:14.985978   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:14.986451   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:44:14.986469   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:14.986830   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:14.987019   26754 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:44:14.988369   26754 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:44:14.988383   26754 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:14.988659   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:14.988690   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:15.003128   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0816 12:44:15.003528   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:15.003933   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:44:15.003950   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:15.004224   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:15.004377   26754 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:44:15.006884   26754 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:15.007238   26754 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:15.007270   26754 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:15.007397   26754 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:15.007813   26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:15.007853   26754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:15.022339   26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0816 12:44:15.022769   26754 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:15.023159   26754 main.go:141] libmachine: Using API Version  1
	I0816 12:44:15.023177   26754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:15.023532   26754 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:15.023755   26754 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:44:15.023922   26754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:15.023939   26754 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:44:15.026518   26754 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:15.027001   26754 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:15.027025   26754 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:15.027178   26754 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:44:15.027325   26754 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:44:15.027441   26754 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:44:15.027582   26754 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:44:15.114237   26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:15.130993   26754 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr" : exit status 3
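For each node, the status check in the stderr above runs the same sequence: ask the driver for host state, open SSH to run `df -h /var` and `sudo systemctl is-active --quiet service kubelet`, and, for control-plane nodes, probe the apiserver /healthz endpoint. When the SSH dial to m02 fails with "no route to host", the node is reported as Host:Error with kubelet and apiserver Nonexistent. The sketch below condenses that per-node sequence; the runSSH and healthz callbacks are hypothetical helpers, not minikube's real status code.

	package main

	import (
		"errors"
		"fmt"
	)

	// nodeStatus mirrors the order of checks visible in the log: host state
	// first, then disk and kubelet over SSH, then the apiserver healthz probe.
	func nodeStatus(hostState string, runSSH func(cmd string) error, healthz func() error, controlPlane bool) map[string]string {
		st := map[string]string{"host": hostState, "kubelet": "Nonexistent", "apiserver": "Nonexistent"}
		if hostState != "Running" {
			return st
		}
		// An unreachable SSH endpoint downgrades the node to Host:Error,
		// as seen for ha-863936-m02 ("no route to host").
		if err := runSSH("df -h /var"); err != nil {
			st["host"] = "Error"
			return st
		}
		if runSSH("sudo systemctl is-active --quiet service kubelet") == nil {
			st["kubelet"] = "Running"
		}
		if !controlPlane {
			st["apiserver"] = "Irrelevant"
		} else if healthz() == nil {
			st["apiserver"] = "Running"
		}
		return st
	}

	func main() {
		unreachable := func(string) error { return errors.New("dial tcp: no route to host") }
		fmt.Println(nodeStatus("Running", unreachable, func() error { return nil }, true))
	}
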
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863936 -n ha-863936
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863936 logs -n 25: (1.545182079s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936:/home/docker/cp-test_ha-863936-m03_ha-863936.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936 sudo cat                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m04 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp testdata/cp-test.txt                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936:/home/docker/cp-test_ha-863936-m04_ha-863936.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936 sudo cat                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03:/home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m03 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863936 node stop m02 -v=7                                                     | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:36:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:36:33.028737   22106 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:36:33.029022   22106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:36:33.029032   22106 out.go:358] Setting ErrFile to fd 2...
	I0816 12:36:33.029038   22106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:36:33.029244   22106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:36:33.029799   22106 out.go:352] Setting JSON to false
	I0816 12:36:33.030663   22106 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1138,"bootTime":1723810655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:36:33.030718   22106 start.go:139] virtualization: kvm guest
	I0816 12:36:33.032809   22106 out.go:177] * [ha-863936] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:36:33.034134   22106 notify.go:220] Checking for updates...
	I0816 12:36:33.034197   22106 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:36:33.035350   22106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:36:33.036498   22106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:36:33.037706   22106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:36:33.038927   22106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:36:33.040084   22106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:36:33.041429   22106 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:36:33.075523   22106 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 12:36:33.076768   22106 start.go:297] selected driver: kvm2
	I0816 12:36:33.076793   22106 start.go:901] validating driver "kvm2" against <nil>
	I0816 12:36:33.076808   22106 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:36:33.077467   22106 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:36:33.077544   22106 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:36:33.091248   22106 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:36:33.091295   22106 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:36:33.091522   22106 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:36:33.091549   22106 cni.go:84] Creating CNI manager for ""
	I0816 12:36:33.091555   22106 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 12:36:33.091564   22106 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 12:36:33.091604   22106 start.go:340] cluster config:
	{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:36:33.091685   22106 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:36:33.093441   22106 out.go:177] * Starting "ha-863936" primary control-plane node in "ha-863936" cluster
	I0816 12:36:33.094542   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:36:33.094580   22106 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:36:33.094590   22106 cache.go:56] Caching tarball of preloaded images
	I0816 12:36:33.094653   22106 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:36:33.094663   22106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:36:33.094930   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:36:33.094948   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json: {Name:mkbf2b129b047186e4a4a70a39c941aa37bc0fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:36:33.095073   22106 start.go:360] acquireMachinesLock for ha-863936: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:36:33.095100   22106 start.go:364] duration metric: took 14.702µs to acquireMachinesLock for "ha-863936"
	I0816 12:36:33.095116   22106 start.go:93] Provisioning new machine with config: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:36:33.095178   22106 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 12:36:33.096737   22106 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 12:36:33.096862   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:36:33.096894   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:36:33.110446   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38001
	I0816 12:36:33.110839   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:36:33.111381   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:36:33.111408   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:36:33.111738   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:36:33.111902   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:33.112046   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:33.112171   22106 start.go:159] libmachine.API.Create for "ha-863936" (driver="kvm2")
	I0816 12:36:33.112198   22106 client.go:168] LocalClient.Create starting
	I0816 12:36:33.112229   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:36:33.112263   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:36:33.112279   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:36:33.112331   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:36:33.112349   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:36:33.112362   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:36:33.112377   22106 main.go:141] libmachine: Running pre-create checks...
	I0816 12:36:33.112389   22106 main.go:141] libmachine: (ha-863936) Calling .PreCreateCheck
	I0816 12:36:33.112703   22106 main.go:141] libmachine: (ha-863936) Calling .GetConfigRaw
	I0816 12:36:33.113064   22106 main.go:141] libmachine: Creating machine...
	I0816 12:36:33.113077   22106 main.go:141] libmachine: (ha-863936) Calling .Create
	I0816 12:36:33.113203   22106 main.go:141] libmachine: (ha-863936) Creating KVM machine...
	I0816 12:36:33.114386   22106 main.go:141] libmachine: (ha-863936) DBG | found existing default KVM network
	I0816 12:36:33.114969   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.114854   22145 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0816 12:36:33.115010   22106 main.go:141] libmachine: (ha-863936) DBG | created network xml: 
	I0816 12:36:33.115031   22106 main.go:141] libmachine: (ha-863936) DBG | <network>
	I0816 12:36:33.115042   22106 main.go:141] libmachine: (ha-863936) DBG |   <name>mk-ha-863936</name>
	I0816 12:36:33.115060   22106 main.go:141] libmachine: (ha-863936) DBG |   <dns enable='no'/>
	I0816 12:36:33.115072   22106 main.go:141] libmachine: (ha-863936) DBG |   
	I0816 12:36:33.115089   22106 main.go:141] libmachine: (ha-863936) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 12:36:33.115100   22106 main.go:141] libmachine: (ha-863936) DBG |     <dhcp>
	I0816 12:36:33.115109   22106 main.go:141] libmachine: (ha-863936) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 12:36:33.115126   22106 main.go:141] libmachine: (ha-863936) DBG |     </dhcp>
	I0816 12:36:33.115136   22106 main.go:141] libmachine: (ha-863936) DBG |   </ip>
	I0816 12:36:33.115144   22106 main.go:141] libmachine: (ha-863936) DBG |   
	I0816 12:36:33.115148   22106 main.go:141] libmachine: (ha-863936) DBG | </network>
	I0816 12:36:33.115155   22106 main.go:141] libmachine: (ha-863936) DBG | 
	I0816 12:36:33.119982   22106 main.go:141] libmachine: (ha-863936) DBG | trying to create private KVM network mk-ha-863936 192.168.39.0/24...
	I0816 12:36:33.182767   22106 main.go:141] libmachine: (ha-863936) DBG | private KVM network mk-ha-863936 192.168.39.0/24 created
	I0816 12:36:33.182793   22106 main.go:141] libmachine: (ha-863936) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936 ...
	I0816 12:36:33.182818   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.182754   22145 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:36:33.182837   22106 main.go:141] libmachine: (ha-863936) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:36:33.182872   22106 main.go:141] libmachine: (ha-863936) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:36:33.429831   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.429695   22145 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa...
	I0816 12:36:33.532414   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.532299   22145 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/ha-863936.rawdisk...
	I0816 12:36:33.532446   22106 main.go:141] libmachine: (ha-863936) DBG | Writing magic tar header
	I0816 12:36:33.532460   22106 main.go:141] libmachine: (ha-863936) DBG | Writing SSH key tar header
	I0816 12:36:33.532471   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.532406   22145 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936 ...
	I0816 12:36:33.532567   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936
	I0816 12:36:33.532596   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:36:33.532610   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936 (perms=drwx------)
	I0816 12:36:33.532619   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:36:33.532632   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:36:33.532639   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:36:33.532645   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:36:33.532655   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:36:33.532662   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:36:33.532670   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:36:33.532675   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home
	I0816 12:36:33.532685   22106 main.go:141] libmachine: (ha-863936) DBG | Skipping /home - not owner
	I0816 12:36:33.532694   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:36:33.532700   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:36:33.532747   22106 main.go:141] libmachine: (ha-863936) Creating domain...
	I0816 12:36:33.533598   22106 main.go:141] libmachine: (ha-863936) define libvirt domain using xml: 
	I0816 12:36:33.533614   22106 main.go:141] libmachine: (ha-863936) <domain type='kvm'>
	I0816 12:36:33.533620   22106 main.go:141] libmachine: (ha-863936)   <name>ha-863936</name>
	I0816 12:36:33.533625   22106 main.go:141] libmachine: (ha-863936)   <memory unit='MiB'>2200</memory>
	I0816 12:36:33.533633   22106 main.go:141] libmachine: (ha-863936)   <vcpu>2</vcpu>
	I0816 12:36:33.533643   22106 main.go:141] libmachine: (ha-863936)   <features>
	I0816 12:36:33.533674   22106 main.go:141] libmachine: (ha-863936)     <acpi/>
	I0816 12:36:33.533697   22106 main.go:141] libmachine: (ha-863936)     <apic/>
	I0816 12:36:33.533704   22106 main.go:141] libmachine: (ha-863936)     <pae/>
	I0816 12:36:33.533720   22106 main.go:141] libmachine: (ha-863936)     
	I0816 12:36:33.533731   22106 main.go:141] libmachine: (ha-863936)   </features>
	I0816 12:36:33.533736   22106 main.go:141] libmachine: (ha-863936)   <cpu mode='host-passthrough'>
	I0816 12:36:33.533741   22106 main.go:141] libmachine: (ha-863936)   
	I0816 12:36:33.533746   22106 main.go:141] libmachine: (ha-863936)   </cpu>
	I0816 12:36:33.533754   22106 main.go:141] libmachine: (ha-863936)   <os>
	I0816 12:36:33.533768   22106 main.go:141] libmachine: (ha-863936)     <type>hvm</type>
	I0816 12:36:33.533780   22106 main.go:141] libmachine: (ha-863936)     <boot dev='cdrom'/>
	I0816 12:36:33.533788   22106 main.go:141] libmachine: (ha-863936)     <boot dev='hd'/>
	I0816 12:36:33.533796   22106 main.go:141] libmachine: (ha-863936)     <bootmenu enable='no'/>
	I0816 12:36:33.533803   22106 main.go:141] libmachine: (ha-863936)   </os>
	I0816 12:36:33.533808   22106 main.go:141] libmachine: (ha-863936)   <devices>
	I0816 12:36:33.533813   22106 main.go:141] libmachine: (ha-863936)     <disk type='file' device='cdrom'>
	I0816 12:36:33.533820   22106 main.go:141] libmachine: (ha-863936)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/boot2docker.iso'/>
	I0816 12:36:33.533830   22106 main.go:141] libmachine: (ha-863936)       <target dev='hdc' bus='scsi'/>
	I0816 12:36:33.533837   22106 main.go:141] libmachine: (ha-863936)       <readonly/>
	I0816 12:36:33.533844   22106 main.go:141] libmachine: (ha-863936)     </disk>
	I0816 12:36:33.533859   22106 main.go:141] libmachine: (ha-863936)     <disk type='file' device='disk'>
	I0816 12:36:33.533870   22106 main.go:141] libmachine: (ha-863936)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:36:33.533884   22106 main.go:141] libmachine: (ha-863936)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/ha-863936.rawdisk'/>
	I0816 12:36:33.533894   22106 main.go:141] libmachine: (ha-863936)       <target dev='hda' bus='virtio'/>
	I0816 12:36:33.533906   22106 main.go:141] libmachine: (ha-863936)     </disk>
	I0816 12:36:33.533912   22106 main.go:141] libmachine: (ha-863936)     <interface type='network'>
	I0816 12:36:33.533926   22106 main.go:141] libmachine: (ha-863936)       <source network='mk-ha-863936'/>
	I0816 12:36:33.533945   22106 main.go:141] libmachine: (ha-863936)       <model type='virtio'/>
	I0816 12:36:33.533962   22106 main.go:141] libmachine: (ha-863936)     </interface>
	I0816 12:36:33.533974   22106 main.go:141] libmachine: (ha-863936)     <interface type='network'>
	I0816 12:36:33.533984   22106 main.go:141] libmachine: (ha-863936)       <source network='default'/>
	I0816 12:36:33.533995   22106 main.go:141] libmachine: (ha-863936)       <model type='virtio'/>
	I0816 12:36:33.534010   22106 main.go:141] libmachine: (ha-863936)     </interface>
	I0816 12:36:33.534018   22106 main.go:141] libmachine: (ha-863936)     <serial type='pty'>
	I0816 12:36:33.534029   22106 main.go:141] libmachine: (ha-863936)       <target port='0'/>
	I0816 12:36:33.534041   22106 main.go:141] libmachine: (ha-863936)     </serial>
	I0816 12:36:33.534049   22106 main.go:141] libmachine: (ha-863936)     <console type='pty'>
	I0816 12:36:33.534062   22106 main.go:141] libmachine: (ha-863936)       <target type='serial' port='0'/>
	I0816 12:36:33.534071   22106 main.go:141] libmachine: (ha-863936)     </console>
	I0816 12:36:33.534087   22106 main.go:141] libmachine: (ha-863936)     <rng model='virtio'>
	I0816 12:36:33.534102   22106 main.go:141] libmachine: (ha-863936)       <backend model='random'>/dev/random</backend>
	I0816 12:36:33.534111   22106 main.go:141] libmachine: (ha-863936)     </rng>
	I0816 12:36:33.534116   22106 main.go:141] libmachine: (ha-863936)     
	I0816 12:36:33.534124   22106 main.go:141] libmachine: (ha-863936)     
	I0816 12:36:33.534132   22106 main.go:141] libmachine: (ha-863936)   </devices>
	I0816 12:36:33.534144   22106 main.go:141] libmachine: (ha-863936) </domain>
	I0816 12:36:33.534154   22106 main.go:141] libmachine: (ha-863936) 
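The lines above are the libvirt domain definition the kvm2 driver assembles before creating the VM. As a rough illustration of the define-and-start step that follows ("Creating domain..."), here is a minimal sketch using the libvirt.org/go/libvirt Go bindings; the XML literal is a placeholder and this is not minikube's actual driver code.

	package main
	
	import (
		"log"
	
		"libvirt.org/go/libvirt"
	)
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			log.Fatalf("connect to libvirt: %v", err)
		}
		defer conn.Close()
	
		// domainXML would carry the full <domain type='kvm'>...</domain> document
		// assembled in the log above; abridged here.
		domainXML := `<domain type='kvm'><name>ha-863936</name>...</domain>`
	
		dom, err := conn.DomainDefineXML(domainXML) // persistently define the guest
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()
	
		if err := dom.Create(); err != nil { // boot it ("Creating domain..." in the log)
			log.Fatalf("start domain: %v", err)
		}
		log.Println("domain defined and started")
	}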
	I0816 12:36:33.538625   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:3f:7d:80 in network default
	I0816 12:36:33.539104   22106 main.go:141] libmachine: (ha-863936) Ensuring networks are active...
	I0816 12:36:33.539122   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:33.539711   22106 main.go:141] libmachine: (ha-863936) Ensuring network default is active
	I0816 12:36:33.539944   22106 main.go:141] libmachine: (ha-863936) Ensuring network mk-ha-863936 is active
	I0816 12:36:33.540382   22106 main.go:141] libmachine: (ha-863936) Getting domain xml...
	I0816 12:36:33.541054   22106 main.go:141] libmachine: (ha-863936) Creating domain...
	I0816 12:36:34.707299   22106 main.go:141] libmachine: (ha-863936) Waiting to get IP...
	I0816 12:36:34.708214   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:34.708557   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:34.708585   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:34.708533   22145 retry.go:31] will retry after 235.79842ms: waiting for machine to come up
	I0816 12:36:34.946052   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:34.946490   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:34.946510   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:34.946459   22145 retry.go:31] will retry after 286.730589ms: waiting for machine to come up
	I0816 12:36:35.234829   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:35.235292   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:35.235319   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:35.235249   22145 retry.go:31] will retry after 372.002112ms: waiting for machine to come up
	I0816 12:36:35.608963   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:35.609506   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:35.609529   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:35.609480   22145 retry.go:31] will retry after 435.098284ms: waiting for machine to come up
	I0816 12:36:36.045944   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:36.046322   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:36.046350   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:36.046274   22145 retry.go:31] will retry after 725.404095ms: waiting for machine to come up
	I0816 12:36:36.773280   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:36.773700   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:36.773729   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:36.773653   22145 retry.go:31] will retry after 744.247182ms: waiting for machine to come up
	I0816 12:36:37.519622   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:37.520086   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:37.520137   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:37.520001   22145 retry.go:31] will retry after 804.927636ms: waiting for machine to come up
	I0816 12:36:38.326481   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:38.326877   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:38.326902   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:38.326829   22145 retry.go:31] will retry after 941.718732ms: waiting for machine to come up
	I0816 12:36:39.269832   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:39.270287   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:39.270329   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:39.270252   22145 retry.go:31] will retry after 1.138744713s: waiting for machine to come up
	I0816 12:36:40.410235   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:40.410623   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:40.410644   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:40.410585   22145 retry.go:31] will retry after 1.56134778s: waiting for machine to come up
	I0816 12:36:41.974169   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:41.974598   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:41.974629   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:41.974543   22145 retry.go:31] will retry after 2.667992359s: waiting for machine to come up
	I0816 12:36:44.645158   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:44.645587   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:44.645635   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:44.645578   22145 retry.go:31] will retry after 2.979452041s: waiting for machine to come up
	I0816 12:36:47.628572   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:47.629020   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:47.629047   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:47.628972   22145 retry.go:31] will retry after 2.839313737s: waiting for machine to come up
	I0816 12:36:50.471956   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:50.472551   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:50.472580   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:50.472504   22145 retry.go:31] will retry after 4.05549474s: waiting for machine to come up
	I0816 12:36:54.529582   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:54.529882   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has current primary IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:54.529901   22106 main.go:141] libmachine: (ha-863936) Found IP for machine: 192.168.39.2
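The retry lines above show the driver polling the mk-ha-863936 network for a DHCP lease bound to the VM's MAC address, with a delay that grows between attempts until an IP appears. A minimal sketch of that pattern, assuming a hypothetical lookupLeaseIP helper:

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// lookupLeaseIP is a hypothetical stand-in for reading the libvirt network's
	// DHCP leases and returning the IP bound to the given MAC address.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease for " + mac + " yet")
	}
	
	// waitForIP polls with a growing delay, much like the retry.go lines above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // rough exponential backoff
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}
	
	func main() {
		if ip, err := waitForIP("52:54:00:88:fe:d4", 2*time.Second); err == nil {
			fmt.Println("found IP:", ip)
		} else {
			fmt.Println(err)
		}
	}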
	I0816 12:36:54.529912   22106 main.go:141] libmachine: (ha-863936) Reserving static IP address...
	I0816 12:36:54.530219   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find host DHCP lease matching {name: "ha-863936", mac: "52:54:00:88:fe:d4", ip: "192.168.39.2"} in network mk-ha-863936
	I0816 12:36:54.599629   22106 main.go:141] libmachine: (ha-863936) DBG | Getting to WaitForSSH function...
	I0816 12:36:54.599659   22106 main.go:141] libmachine: (ha-863936) Reserved static IP address: 192.168.39.2
	I0816 12:36:54.599672   22106 main.go:141] libmachine: (ha-863936) Waiting for SSH to be available...
	I0816 12:36:54.602035   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:54.602380   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936
	I0816 12:36:54.602405   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find defined IP address of network mk-ha-863936 interface with MAC address 52:54:00:88:fe:d4
	I0816 12:36:54.602542   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH client type: external
	I0816 12:36:54.602568   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa (-rw-------)
	I0816 12:36:54.602613   22106 main.go:141] libmachine: (ha-863936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:36:54.602645   22106 main.go:141] libmachine: (ha-863936) DBG | About to run SSH command:
	I0816 12:36:54.602762   22106 main.go:141] libmachine: (ha-863936) DBG | exit 0
	I0816 12:36:54.606160   22106 main.go:141] libmachine: (ha-863936) DBG | SSH cmd err, output: exit status 255: 
	I0816 12:36:54.606178   22106 main.go:141] libmachine: (ha-863936) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 12:36:54.606185   22106 main.go:141] libmachine: (ha-863936) DBG | command : exit 0
	I0816 12:36:54.606192   22106 main.go:141] libmachine: (ha-863936) DBG | err     : exit status 255
	I0816 12:36:54.606199   22106 main.go:141] libmachine: (ha-863936) DBG | output  : 
	I0816 12:36:57.608362   22106 main.go:141] libmachine: (ha-863936) DBG | Getting to WaitForSSH function...
	I0816 12:36:57.611132   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.611494   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.611523   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.611608   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH client type: external
	I0816 12:36:57.611642   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa (-rw-------)
	I0816 12:36:57.611672   22106 main.go:141] libmachine: (ha-863936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:36:57.611686   22106 main.go:141] libmachine: (ha-863936) DBG | About to run SSH command:
	I0816 12:36:57.611697   22106 main.go:141] libmachine: (ha-863936) DBG | exit 0
	I0816 12:36:57.733040   22106 main.go:141] libmachine: (ha-863936) DBG | SSH cmd err, output: <nil>: 
	I0816 12:36:57.733299   22106 main.go:141] libmachine: (ha-863936) KVM machine creation complete!
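The earlier exit-status-255 probe and the successful one just above are how the driver decides SSH is reachable: it shells out to /usr/bin/ssh and runs `exit 0` until the command succeeds. A minimal sketch of that probe using os/exec, with the option list abridged from the log:

	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		// Key path, user, and address are the ones reported in the log above.
		cmd := exec.Command("/usr/bin/ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa",
			"-p", "22",
			"docker@192.168.39.2",
			"exit 0")
		if err := cmd.Run(); err != nil {
			// Before sshd is up this typically fails with exit status 255, as seen above.
			log.Fatalf("SSH not ready: %v", err)
		}
		log.Println("SSH is available")
	}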
	I0816 12:36:57.733639   22106 main.go:141] libmachine: (ha-863936) Calling .GetConfigRaw
	I0816 12:36:57.734186   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:57.734331   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:57.734501   22106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:36:57.734515   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:36:57.735605   22106 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:36:57.735617   22106 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:36:57.735622   22106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:36:57.735628   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:57.737594   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.737913   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.737937   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.738062   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:57.738225   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.738384   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.738529   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:57.738675   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:57.738912   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:57.738928   22106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:36:57.836202   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:36:57.836231   22106 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:36:57.836240   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:57.838974   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.839315   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.839347   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.839552   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:57.839749   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.839916   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.840055   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:57.840205   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:57.840396   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:57.840409   22106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:36:57.937627   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:36:57.937686   22106 main.go:141] libmachine: found compatible host: buildroot
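The provisioner is detected by running `cat /etc/os-release` on the guest and matching the result against known hosts (here Buildroot). A small illustrative matcher, not minikube's actual detector:

	package main
	
	import (
		"bufio"
		"fmt"
		"strings"
	)
	
	// detectProvisioner matches the ID field of an os-release document.
	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return "unknown"
	}
	
	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fmt.Println("found compatible host:", detectProvisioner(out))
	}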
	I0816 12:36:57.937693   22106 main.go:141] libmachine: Provisioning with buildroot...
	I0816 12:36:57.937700   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:57.937945   22106 buildroot.go:166] provisioning hostname "ha-863936"
	I0816 12:36:57.937971   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:57.938121   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:57.940492   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.940894   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.940929   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.941085   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:57.941286   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.941472   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.941596   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:57.941743   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:57.941969   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:57.941984   22106 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936 && echo "ha-863936" | sudo tee /etc/hostname
	I0816 12:36:58.051455   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936
	
	I0816 12:36:58.051484   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.054131   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.054428   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.054455   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.054631   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.054839   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.055014   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.055187   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.055335   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:58.055527   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:58.055548   22106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:36:58.162086   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:36:58.162115   22106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:36:58.162165   22106 buildroot.go:174] setting up certificates
	I0816 12:36:58.162183   22106 provision.go:84] configureAuth start
	I0816 12:36:58.162191   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:58.162442   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:36:58.165016   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.165350   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.165373   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.165526   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.167671   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.168011   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.168037   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.168147   22106 provision.go:143] copyHostCerts
	I0816 12:36:58.168177   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:36:58.168216   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:36:58.168236   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:36:58.168314   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:36:58.168420   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:36:58.168445   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:36:58.168451   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:36:58.168502   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:36:58.168577   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:36:58.168615   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:36:58.168624   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:36:58.168661   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:36:58.168762   22106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936 san=[127.0.0.1 192.168.39.2 ha-863936 localhost minikube]
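The server certificate is generated on the host with the machine's IP and hostnames as SANs and then copied to the guest in the copyRemoteCerts step below. A rough standard-library sketch of minting such a certificate; it is self-signed here for brevity, whereas the log shows it signed with the cluster CA key (ca-key.pem):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-863936"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as listed in the log: 127.0.0.1 192.168.39.2 ha-863936 localhost minikube
			DNSNames:    []string{"ha-863936", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}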
	I0816 12:36:58.274002   22106 provision.go:177] copyRemoteCerts
	I0816 12:36:58.274071   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:36:58.274102   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.276663   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.276965   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.276994   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.277196   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.277361   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.277516   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.277664   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:58.355502   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:36:58.355592   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:36:58.383229   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:36:58.383294   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 12:36:58.410432   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:36:58.410508   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 12:36:58.437316   22106 provision.go:87] duration metric: took 275.123314ms to configureAuth
	I0816 12:36:58.437338   22106 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:36:58.437527   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:36:58.437605   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.439981   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.440293   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.440318   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.440490   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.440673   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.440832   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.440996   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.441159   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:58.441317   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:58.441330   22106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:36:58.710508   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:36:58.710534   22106 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:36:58.710543   22106 main.go:141] libmachine: (ha-863936) Calling .GetURL
	I0816 12:36:58.711676   22106 main.go:141] libmachine: (ha-863936) DBG | Using libvirt version 6000000
	I0816 12:36:58.713804   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.714036   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.714070   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.714187   22106 main.go:141] libmachine: Docker is up and running!
	I0816 12:36:58.714202   22106 main.go:141] libmachine: Reticulating splines...
	I0816 12:36:58.714210   22106 client.go:171] duration metric: took 25.602002765s to LocalClient.Create
	I0816 12:36:58.714235   22106 start.go:167] duration metric: took 25.602064165s to libmachine.API.Create "ha-863936"
	I0816 12:36:58.714256   22106 start.go:293] postStartSetup for "ha-863936" (driver="kvm2")
	I0816 12:36:58.714279   22106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:36:58.714298   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.714526   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:36:58.714548   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.716428   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.716673   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.716699   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.716805   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.716975   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.717145   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.717303   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:58.795033   22106 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:36:58.799670   22106 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:36:58.799688   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:36:58.799754   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:36:58.799847   22106 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:36:58.799857   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:36:58.799980   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:36:58.809592   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:36:58.837153   22106 start.go:296] duration metric: took 122.874442ms for postStartSetup
	I0816 12:36:58.837200   22106 main.go:141] libmachine: (ha-863936) Calling .GetConfigRaw
	I0816 12:36:58.837738   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:36:58.840054   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.840360   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.840382   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.840590   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:36:58.840792   22106 start.go:128] duration metric: took 25.745604524s to createHost
	I0816 12:36:58.840815   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.842610   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.842896   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.842925   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.843043   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.843206   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.843336   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.843494   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.843671   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:58.843871   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:58.843883   22106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:36:58.941633   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811818.921188895
	
	I0816 12:36:58.941655   22106 fix.go:216] guest clock: 1723811818.921188895
	I0816 12:36:58.941663   22106 fix.go:229] Guest: 2024-08-16 12:36:58.921188895 +0000 UTC Remote: 2024-08-16 12:36:58.84080489 +0000 UTC m=+25.845157784 (delta=80.384005ms)
	I0816 12:36:58.941701   22106 fix.go:200] guest clock delta is within tolerance: 80.384005ms
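The clock check above runs `date +%s.%N` on the guest and compares the result with the host's timestamp, accepting a small skew. A small sketch of parsing and comparing that output; the 1s tolerance is illustrative, not necessarily the value minikube uses:

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock converts `date +%s.%N` output (e.g. "1723811818.921188895")
	// into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, err := parseGuestClock("1723811818.921188895")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta <= time.Second)
	}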
	I0816 12:36:58.941708   22106 start.go:83] releasing machines lock for "ha-863936", held for 25.846598719s
	I0816 12:36:58.941732   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.941956   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:36:58.944195   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.944538   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.944578   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.944679   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.945211   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.945356   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.945429   22106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:36:58.945477   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.945629   22106 ssh_runner.go:195] Run: cat /version.json
	I0816 12:36:58.945652   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.947899   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948211   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.948234   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948252   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948347   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.948536   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.948693   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.948713   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948752   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.948862   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.948993   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:58.949063   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.949201   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.949332   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:59.022013   22106 ssh_runner.go:195] Run: systemctl --version
	I0816 12:36:59.046055   22106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:36:59.199918   22106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:36:59.205719   22106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:36:59.205792   22106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:36:59.222101   22106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:36:59.222124   22106 start.go:495] detecting cgroup driver to use...
	I0816 12:36:59.222183   22106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:36:59.238191   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:36:59.251719   22106 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:36:59.251769   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:36:59.265166   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:36:59.278597   22106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:36:59.393979   22106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:36:59.544406   22106 docker.go:233] disabling docker service ...
	I0816 12:36:59.544464   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:36:59.558840   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:36:59.571562   22106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:36:59.694834   22106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:36:59.813595   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:36:59.827354   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:36:59.845758   22106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:36:59.845811   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.856402   22106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:36:59.856447   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.866890   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.877035   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.887490   22106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:36:59.897770   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.907908   22106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.924420   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.934587   22106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:36:59.943661   22106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:36:59.943727   22106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:36:59.956613   22106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:36:59.965940   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:37:00.085504   22106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:37:00.221358   22106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:37:00.221431   22106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
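After restarting CRI-O, minikube waits up to 60s for the runtime socket to appear before probing crictl. A simplified stand-in for that wait loop, not the actual implementation:

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until the path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("CRI socket is ready")
	}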
	I0816 12:37:00.226179   22106 start.go:563] Will wait 60s for crictl version
	I0816 12:37:00.226239   22106 ssh_runner.go:195] Run: which crictl
	I0816 12:37:00.229795   22106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:37:00.268160   22106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:37:00.268251   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:00.294793   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:00.324459   22106 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:37:00.325811   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:37:00.328293   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:00.328641   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:00.328667   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:00.328847   22106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:37:00.332764   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:37:00.345931   22106 kubeadm.go:883] updating cluster {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:37:00.346063   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:37:00.346111   22106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:37:00.377715   22106 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 12:37:00.377789   22106 ssh_runner.go:195] Run: which lz4
	I0816 12:37:00.381595   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 12:37:00.381678   22106 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 12:37:00.385779   22106 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 12:37:00.385813   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 12:37:01.718458   22106 crio.go:462] duration metric: took 1.336808857s to copy over tarball
	I0816 12:37:01.718543   22106 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 12:37:03.731657   22106 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.013082282s)
	I0816 12:37:03.731688   22106 crio.go:469] duration metric: took 2.013202273s to extract the tarball
	I0816 12:37:03.731696   22106 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 12:37:03.768560   22106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:37:03.814909   22106 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:37:03.814938   22106 cache_images.go:84] Images are preloaded, skipping loading
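Whether the preload tarball is needed is decided by listing images with `crictl images --output json` and checking for the expected tags; after the tarball is extracted the check above finds them all and loading is skipped. A sketch of that check; the JSON field names are an assumption based on the CRI image listing, not verified against crictl's exact schema:

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// imageList mirrors the fields of `crictl images --output json` used here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	// hasImage reports whether the given tag appears in the crictl JSON output.
	func hasImage(crictlJSON []byte, tag string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(crictlJSON, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}
	
	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"]}]}`)
		ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.31.0")
		fmt.Println("preloaded image present:", ok)
	}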
	I0816 12:37:03.814945   22106 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.31.0 crio true true} ...
	I0816 12:37:03.815033   22106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:37:03.815109   22106 ssh_runner.go:195] Run: crio config
	I0816 12:37:03.864151   22106 cni.go:84] Creating CNI manager for ""
	I0816 12:37:03.864171   22106 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 12:37:03.864180   22106 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:37:03.864199   22106 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863936 NodeName:ha-863936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:37:03.864315   22106 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:37:03.864339   22106 kube-vip.go:115] generating kube-vip config ...
	I0816 12:37:03.864381   22106 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:37:03.881475   22106 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:37:03.881674   22106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
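This static-pod manifest has kube-vip announce 192.168.39.254 (the APIServerHAVIP from the cluster config) on eth0 and, with lb_enable/lb_port set, load-balance port 8443 across control-plane members. A rough reachability check once the API server is up (a sketch; the -k merely skips TLS verification for a quick liveness probe, and nothing below is run by the test itself):

    curl -k https://192.168.39.254:8443/healthz
    ip addr show eth0 | grep 192.168.39.254    # the VIP appears on the current kube-vip leader's interface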
	I0816 12:37:03.881752   22106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:37:03.891974   22106 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:37:03.892045   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 12:37:03.904064   22106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 12:37:03.920714   22106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:37:03.937785   22106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0816 12:37:03.954082   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0816 12:37:03.969749   22106 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:37:03.973444   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:37:03.985039   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:37:04.117836   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:37:04.135197   22106 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.2
	I0816 12:37:04.135219   22106 certs.go:194] generating shared ca certs ...
	I0816 12:37:04.135238   22106 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.135409   22106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:37:04.135464   22106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:37:04.135479   22106 certs.go:256] generating profile certs ...
	I0816 12:37:04.135540   22106 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:37:04.135557   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt with IP's: []
	I0816 12:37:04.286829   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt ...
	I0816 12:37:04.286855   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt: {Name:mk3c8e19727ad782fc37b7c10c318864d8bf662a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.287013   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key ...
	I0816 12:37:04.287023   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key: {Name:mk20a68f4171979de7052db8f1e89f5baaff55a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.287123   22106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89
	I0816 12:37:04.287140   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.254]
	I0816 12:37:04.419270   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89 ...
	I0816 12:37:04.419298   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89: {Name:mkfebc5717092261a16c434a47e224f6ebd88df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.419437   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89 ...
	I0816 12:37:04.419449   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89: {Name:mk235afa59962aa082ba1b26e96b63080d574abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.419518   22106 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:37:04.419598   22106 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:37:04.419652   22106 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:37:04.419666   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt with IP's: []
	I0816 12:37:04.753212   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt ...
	I0816 12:37:04.753239   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt: {Name:mk61c146dbc6bf8fbcfd831eae718e0e1aa7bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.753382   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key ...
	I0816 12:37:04.753393   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key: {Name:mk7152f64e6ce778dd27d833594971ad2030a4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.753454   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:37:04.753470   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:37:04.753481   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:37:04.753494   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:37:04.753507   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:37:04.753519   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:37:04.753531   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:37:04.753543   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:37:04.753590   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:37:04.753624   22106 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:37:04.753632   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:37:04.753653   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:37:04.753676   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:37:04.753698   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:37:04.753734   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:37:04.753758   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:04.753772   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:37:04.753807   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:37:04.754318   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:37:04.779865   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:37:04.803727   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:37:04.827684   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:37:04.851974   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 12:37:04.875136   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 12:37:04.901383   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:37:04.942284   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:37:04.969463   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:37:04.992491   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:37:05.015994   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:37:05.040511   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
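The apiserver certificate generated above was signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.2 and 192.168.39.254; after the scp it can be confirmed on the guest with a plain openssl dump (again a sketch, not part of the captured run):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'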
	I0816 12:37:05.056803   22106 ssh_runner.go:195] Run: openssl version
	I0816 12:37:05.062412   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:37:05.073019   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:05.077441   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:05.077485   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:05.083071   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:37:05.093512   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:37:05.103596   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:37:05.107740   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:37:05.107780   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:37:05.113302   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:37:05.123486   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:37:05.133632   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:37:05.137892   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:37:05.137932   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:37:05.143541   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
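The three ln -fs calls above follow OpenSSL's subject-hash naming convention: the link name under /etc/ssl/certs is the output of "openssl x509 -hash" plus a ".0" suffix, which is how OpenSSL locates CA certificates during verification. Reproducing the minikubeCA link by hand would look roughly like:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 for this CA
    ls -l /etc/ssl/certs/b5213941.0                                            # symlink back to minikubeCA.pem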
	I0816 12:37:05.153932   22106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:37:05.157875   22106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:37:05.157925   22106 kubeadm.go:392] StartCluster: {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:37:05.157993   22106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:37:05.158032   22106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:37:05.198628   22106 cri.go:89] found id: ""
	I0816 12:37:05.198687   22106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 12:37:05.208057   22106 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 12:37:05.221611   22106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 12:37:05.233147   22106 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 12:37:05.233165   22106 kubeadm.go:157] found existing configuration files:
	
	I0816 12:37:05.233223   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 12:37:05.241915   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 12:37:05.241973   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 12:37:05.250984   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 12:37:05.259559   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 12:37:05.259609   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 12:37:05.268632   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 12:37:05.277082   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 12:37:05.277124   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 12:37:05.286168   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 12:37:05.294641   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 12:37:05.294686   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 12:37:05.303471   22106 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 12:37:05.406815   22106 kubeadm.go:310] W0816 12:37:05.392317     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:37:05.409908   22106 kubeadm.go:310] W0816 12:37:05.395512     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:37:05.518595   22106 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
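The two deprecation warnings come from kubeadm itself: the generated config still uses the kubeadm.k8s.io/v1beta3 API. The migration that kubeadm suggests could be sketched as below, using the config path from the init command above (the output path is illustrative, not something minikube writes):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml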
	I0816 12:37:16.343194   22106 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 12:37:16.343273   22106 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 12:37:16.343362   22106 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 12:37:16.343494   22106 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 12:37:16.343613   22106 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 12:37:16.343705   22106 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 12:37:16.345471   22106 out.go:235]   - Generating certificates and keys ...
	I0816 12:37:16.345570   22106 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 12:37:16.345653   22106 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 12:37:16.345741   22106 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 12:37:16.345810   22106 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 12:37:16.345878   22106 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 12:37:16.345958   22106 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 12:37:16.346013   22106 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 12:37:16.346134   22106 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-863936 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0816 12:37:16.346203   22106 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 12:37:16.346310   22106 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-863936 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0816 12:37:16.346365   22106 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 12:37:16.346433   22106 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 12:37:16.346501   22106 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 12:37:16.346565   22106 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 12:37:16.346636   22106 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 12:37:16.346714   22106 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 12:37:16.346783   22106 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 12:37:16.346873   22106 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 12:37:16.346953   22106 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 12:37:16.347033   22106 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 12:37:16.347128   22106 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 12:37:16.348632   22106 out.go:235]   - Booting up control plane ...
	I0816 12:37:16.348728   22106 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 12:37:16.348816   22106 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 12:37:16.348878   22106 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 12:37:16.349027   22106 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 12:37:16.349155   22106 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 12:37:16.349225   22106 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 12:37:16.349372   22106 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 12:37:16.349501   22106 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 12:37:16.349559   22106 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.000363ms
	I0816 12:37:16.349659   22106 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 12:37:16.349739   22106 kubeadm.go:310] [api-check] The API server is healthy after 6.014136208s
	I0816 12:37:16.349845   22106 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 12:37:16.349953   22106 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 12:37:16.350002   22106 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 12:37:16.350159   22106 kubeadm.go:310] [mark-control-plane] Marking the node ha-863936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 12:37:16.350218   22106 kubeadm.go:310] [bootstrap-token] Using token: lvudru.afb7dzk6lhr7lh2y
	I0816 12:37:16.351850   22106 out.go:235]   - Configuring RBAC rules ...
	I0816 12:37:16.351979   22106 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 12:37:16.352082   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 12:37:16.352227   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 12:37:16.352376   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 12:37:16.352482   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 12:37:16.352588   22106 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 12:37:16.352706   22106 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 12:37:16.352744   22106 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 12:37:16.352783   22106 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 12:37:16.352789   22106 kubeadm.go:310] 
	I0816 12:37:16.352868   22106 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 12:37:16.352879   22106 kubeadm.go:310] 
	I0816 12:37:16.353010   22106 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 12:37:16.353022   22106 kubeadm.go:310] 
	I0816 12:37:16.353053   22106 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 12:37:16.353129   22106 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 12:37:16.353197   22106 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 12:37:16.353207   22106 kubeadm.go:310] 
	I0816 12:37:16.353299   22106 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 12:37:16.353311   22106 kubeadm.go:310] 
	I0816 12:37:16.353375   22106 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 12:37:16.353384   22106 kubeadm.go:310] 
	I0816 12:37:16.353471   22106 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 12:37:16.353683   22106 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 12:37:16.353779   22106 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 12:37:16.353789   22106 kubeadm.go:310] 
	I0816 12:37:16.353891   22106 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 12:37:16.353999   22106 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 12:37:16.354008   22106 kubeadm.go:310] 
	I0816 12:37:16.354144   22106 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lvudru.afb7dzk6lhr7lh2y \
	I0816 12:37:16.354282   22106 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 12:37:16.354313   22106 kubeadm.go:310] 	--control-plane 
	I0816 12:37:16.354320   22106 kubeadm.go:310] 
	I0816 12:37:16.354404   22106 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 12:37:16.354424   22106 kubeadm.go:310] 
	I0816 12:37:16.354538   22106 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lvudru.afb7dzk6lhr7lh2y \
	I0816 12:37:16.354652   22106 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
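The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed (for instance to join another node after this output has scrolled away), the standard kubeadm recipe applied to minikube's certificateDir would be roughly:

    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'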
	I0816 12:37:16.354695   22106 cni.go:84] Creating CNI manager for ""
	I0816 12:37:16.354704   22106 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 12:37:16.356387   22106 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 12:37:16.357774   22106 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 12:37:16.363355   22106 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 12:37:16.363371   22106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 12:37:16.384862   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
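Since the CNI manager recommended kindnet for this multinode profile, a quick follow-up is to confirm its daemonset pods come up in kube-system. The label selector below is the one kindnet's upstream manifest uses and is an assumption, not something shown in this log:

    kubectl --context ha-863936 -n kube-system get pods -l app=kindnet -o wide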
	I0816 12:37:16.748270   22106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 12:37:16.748343   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:16.748368   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863936 minikube.k8s.io/updated_at=2024_08_16T12_37_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=ha-863936 minikube.k8s.io/primary=true
	I0816 12:37:16.781305   22106 ops.go:34] apiserver oom_adj: -16
	I0816 12:37:16.884210   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:17.384293   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:17.884514   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:18.385238   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:18.884557   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:19.385272   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:19.884233   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:19.982889   22106 kubeadm.go:1113] duration metric: took 3.234603021s to wait for elevateKubeSystemPrivileges
	I0816 12:37:19.982926   22106 kubeadm.go:394] duration metric: took 14.825002272s to StartCluster
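The repeated "kubectl get sa default" calls above were polling for the default service account while the node label and the minikube-rbac clusterrolebinding from the earlier invocations took effect; both can be verified with the same in-guest kubectl that the log uses (a sketch):

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-863936 --show-labels
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac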
	I0816 12:37:19.982948   22106 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:19.983025   22106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:37:19.983705   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:19.983899   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 12:37:19.983915   22106 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 12:37:19.983955   22106 addons.go:69] Setting storage-provisioner=true in profile "ha-863936"
	I0816 12:37:19.983988   22106 addons.go:234] Setting addon storage-provisioner=true in "ha-863936"
	I0816 12:37:19.983901   22106 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:37:19.984019   22106 addons.go:69] Setting default-storageclass=true in profile "ha-863936"
	I0816 12:37:19.984028   22106 start.go:241] waiting for startup goroutines ...
	I0816 12:37:19.984024   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:37:19.984085   22106 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-863936"
	I0816 12:37:19.984163   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:19.984423   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:19.984451   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:19.984485   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:19.984517   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:19.999421   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0816 12:37:19.999861   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:19.999953   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0816 12:37:20.000281   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.000461   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.000487   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.000742   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.000767   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.000856   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.001041   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:20.001088   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.001572   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.001598   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.003400   22106 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:37:20.003717   22106 kapi.go:59] client config for ha-863936: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key", CAFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 12:37:20.004233   22106 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 12:37:20.004587   22106 addons.go:234] Setting addon default-storageclass=true in "ha-863936"
	I0816 12:37:20.004631   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:37:20.005023   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.005053   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.016523   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0816 12:37:20.016987   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.017535   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.017553   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.017859   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.018056   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:20.019745   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:37:20.019891   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0816 12:37:20.020227   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.020608   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.020625   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.020925   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.021540   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.021606   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.022159   22106 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 12:37:20.023672   22106 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:37:20.023693   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 12:37:20.023713   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:37:20.026914   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.027315   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:20.027335   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.027483   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:37:20.027626   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:37:20.027725   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:37:20.027820   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:37:20.038121   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0816 12:37:20.038481   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.038973   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.038994   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.039283   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.039456   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:20.040897   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:37:20.041134   22106 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 12:37:20.041146   22106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 12:37:20.041160   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:37:20.043868   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.045012   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:37:20.045018   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:20.045046   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.045217   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:37:20.045376   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:37:20.045500   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:37:20.094170   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 12:37:20.165956   22106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:37:20.190074   22106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 12:37:20.595501   22106 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
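The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP 192.168.39.1. The injected stanza (taken verbatim from that command) and one way to confirm it landed:

    kubectl --context ha-863936 -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'
    # expected block inside the Corefile:
    #    hosts {
    #       192.168.39.1 host.minikube.internal
    #       fallthrough
    #    }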
	I0816 12:37:20.943920   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.943938   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.943948   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.943955   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.944243   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944252   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944265   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944269   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944276   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.944280   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.944285   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.944288   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.944481   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944484   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944494   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944505   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944566   22106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 12:37:20.944588   22106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 12:37:20.944672   22106 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 12:37:20.944681   22106 round_trippers.go:469] Request Headers:
	I0816 12:37:20.944700   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:37:20.944705   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:37:20.955313   22106 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0816 12:37:20.955988   22106 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 12:37:20.956003   22106 round_trippers.go:469] Request Headers:
	I0816 12:37:20.956010   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:37:20.956013   22106 round_trippers.go:473]     Content-Type: application/json
	I0816 12:37:20.956016   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:37:20.958516   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:37:20.958647   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.958661   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.958945   22106 main.go:141] libmachine: (ha-863936) DBG | Closing plugin on server side
	I0816 12:37:20.958989   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.958998   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.961733   22106 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 12:37:20.962970   22106 addons.go:510] duration metric: took 979.05008ms for enable addons: enabled=[storage-provisioner default-storageclass]
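With storage-provisioner and default-storageclass enabled for this profile, the addon state and the "standard" StorageClass touched by the PUT above can be checked from the host; the binary path and context name below are taken from this report:

    out/minikube-linux-amd64 -p ha-863936 addons list
    kubectl --context ha-863936 get storageclass standard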
	I0816 12:37:20.963004   22106 start.go:246] waiting for cluster config update ...
	I0816 12:37:20.963016   22106 start.go:255] writing updated cluster config ...
	I0816 12:37:20.964754   22106 out.go:201] 
	I0816 12:37:20.966457   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:20.966523   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:37:20.968354   22106 out.go:177] * Starting "ha-863936-m02" control-plane node in "ha-863936" cluster
	I0816 12:37:20.969799   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:37:20.969820   22106 cache.go:56] Caching tarball of preloaded images
	I0816 12:37:20.969901   22106 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:37:20.969912   22106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:37:20.969980   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:37:20.970128   22106 start.go:360] acquireMachinesLock for ha-863936-m02: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:37:20.970164   22106 start.go:364] duration metric: took 19.96µs to acquireMachinesLock for "ha-863936-m02"
	I0816 12:37:20.970178   22106 start.go:93] Provisioning new machine with config: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:37:20.970252   22106 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0816 12:37:20.971726   22106 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 12:37:20.971799   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.971825   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.986311   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0816 12:37:20.986832   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.987310   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.987330   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.987658   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.987875   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:20.988025   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:20.988221   22106 start.go:159] libmachine.API.Create for "ha-863936" (driver="kvm2")
	I0816 12:37:20.988243   22106 client.go:168] LocalClient.Create starting
	I0816 12:37:20.988275   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:37:20.988311   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:37:20.988332   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:37:20.988400   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:37:20.988430   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:37:20.988452   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:37:20.988479   22106 main.go:141] libmachine: Running pre-create checks...
	I0816 12:37:20.988491   22106 main.go:141] libmachine: (ha-863936-m02) Calling .PreCreateCheck
	I0816 12:37:20.988642   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetConfigRaw
	I0816 12:37:20.989056   22106 main.go:141] libmachine: Creating machine...
	I0816 12:37:20.989070   22106 main.go:141] libmachine: (ha-863936-m02) Calling .Create
	I0816 12:37:20.989213   22106 main.go:141] libmachine: (ha-863936-m02) Creating KVM machine...
	I0816 12:37:20.990534   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found existing default KVM network
	I0816 12:37:20.990706   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found existing private KVM network mk-ha-863936
	I0816 12:37:20.990851   22106 main.go:141] libmachine: (ha-863936-m02) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02 ...
	I0816 12:37:20.990875   22106 main.go:141] libmachine: (ha-863936-m02) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:37:20.990963   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:20.990843   22488 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:37:20.991031   22106 main.go:141] libmachine: (ha-863936-m02) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:37:21.234968   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:21.234855   22488 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa...
	I0816 12:37:21.638861   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:21.638689   22488 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/ha-863936-m02.rawdisk...
	I0816 12:37:21.638898   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Writing magic tar header
	I0816 12:37:21.638915   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Writing SSH key tar header
	I0816 12:37:21.638932   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:21.638831   22488 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02 ...
	I0816 12:37:21.638949   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02
	I0816 12:37:21.639051   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:37:21.639080   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02 (perms=drwx------)
	I0816 12:37:21.639091   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:37:21.639107   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:37:21.639120   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:37:21.639135   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:37:21.639163   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home
	I0816 12:37:21.639181   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Skipping /home - not owner
	I0816 12:37:21.639199   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:37:21.639211   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:37:21.639222   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:37:21.639236   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:37:21.639248   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:37:21.639263   22106 main.go:141] libmachine: (ha-863936-m02) Creating domain...
	I0816 12:37:21.640077   22106 main.go:141] libmachine: (ha-863936-m02) define libvirt domain using xml: 
	I0816 12:37:21.640101   22106 main.go:141] libmachine: (ha-863936-m02) <domain type='kvm'>
	I0816 12:37:21.640112   22106 main.go:141] libmachine: (ha-863936-m02)   <name>ha-863936-m02</name>
	I0816 12:37:21.640127   22106 main.go:141] libmachine: (ha-863936-m02)   <memory unit='MiB'>2200</memory>
	I0816 12:37:21.640140   22106 main.go:141] libmachine: (ha-863936-m02)   <vcpu>2</vcpu>
	I0816 12:37:21.640147   22106 main.go:141] libmachine: (ha-863936-m02)   <features>
	I0816 12:37:21.640160   22106 main.go:141] libmachine: (ha-863936-m02)     <acpi/>
	I0816 12:37:21.640168   22106 main.go:141] libmachine: (ha-863936-m02)     <apic/>
	I0816 12:37:21.640175   22106 main.go:141] libmachine: (ha-863936-m02)     <pae/>
	I0816 12:37:21.640182   22106 main.go:141] libmachine: (ha-863936-m02)     
	I0816 12:37:21.640189   22106 main.go:141] libmachine: (ha-863936-m02)   </features>
	I0816 12:37:21.640197   22106 main.go:141] libmachine: (ha-863936-m02)   <cpu mode='host-passthrough'>
	I0816 12:37:21.640206   22106 main.go:141] libmachine: (ha-863936-m02)   
	I0816 12:37:21.640267   22106 main.go:141] libmachine: (ha-863936-m02)   </cpu>
	I0816 12:37:21.640310   22106 main.go:141] libmachine: (ha-863936-m02)   <os>
	I0816 12:37:21.640325   22106 main.go:141] libmachine: (ha-863936-m02)     <type>hvm</type>
	I0816 12:37:21.640337   22106 main.go:141] libmachine: (ha-863936-m02)     <boot dev='cdrom'/>
	I0816 12:37:21.640351   22106 main.go:141] libmachine: (ha-863936-m02)     <boot dev='hd'/>
	I0816 12:37:21.640358   22106 main.go:141] libmachine: (ha-863936-m02)     <bootmenu enable='no'/>
	I0816 12:37:21.640369   22106 main.go:141] libmachine: (ha-863936-m02)   </os>
	I0816 12:37:21.640378   22106 main.go:141] libmachine: (ha-863936-m02)   <devices>
	I0816 12:37:21.640390   22106 main.go:141] libmachine: (ha-863936-m02)     <disk type='file' device='cdrom'>
	I0816 12:37:21.640407   22106 main.go:141] libmachine: (ha-863936-m02)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/boot2docker.iso'/>
	I0816 12:37:21.640438   22106 main.go:141] libmachine: (ha-863936-m02)       <target dev='hdc' bus='scsi'/>
	I0816 12:37:21.640460   22106 main.go:141] libmachine: (ha-863936-m02)       <readonly/>
	I0816 12:37:21.640473   22106 main.go:141] libmachine: (ha-863936-m02)     </disk>
	I0816 12:37:21.640483   22106 main.go:141] libmachine: (ha-863936-m02)     <disk type='file' device='disk'>
	I0816 12:37:21.640508   22106 main.go:141] libmachine: (ha-863936-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:37:21.640523   22106 main.go:141] libmachine: (ha-863936-m02)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/ha-863936-m02.rawdisk'/>
	I0816 12:37:21.640536   22106 main.go:141] libmachine: (ha-863936-m02)       <target dev='hda' bus='virtio'/>
	I0816 12:37:21.640555   22106 main.go:141] libmachine: (ha-863936-m02)     </disk>
	I0816 12:37:21.640576   22106 main.go:141] libmachine: (ha-863936-m02)     <interface type='network'>
	I0816 12:37:21.640590   22106 main.go:141] libmachine: (ha-863936-m02)       <source network='mk-ha-863936'/>
	I0816 12:37:21.640603   22106 main.go:141] libmachine: (ha-863936-m02)       <model type='virtio'/>
	I0816 12:37:21.640614   22106 main.go:141] libmachine: (ha-863936-m02)     </interface>
	I0816 12:37:21.640624   22106 main.go:141] libmachine: (ha-863936-m02)     <interface type='network'>
	I0816 12:37:21.640634   22106 main.go:141] libmachine: (ha-863936-m02)       <source network='default'/>
	I0816 12:37:21.640646   22106 main.go:141] libmachine: (ha-863936-m02)       <model type='virtio'/>
	I0816 12:37:21.640657   22106 main.go:141] libmachine: (ha-863936-m02)     </interface>
	I0816 12:37:21.640669   22106 main.go:141] libmachine: (ha-863936-m02)     <serial type='pty'>
	I0816 12:37:21.640679   22106 main.go:141] libmachine: (ha-863936-m02)       <target port='0'/>
	I0816 12:37:21.640691   22106 main.go:141] libmachine: (ha-863936-m02)     </serial>
	I0816 12:37:21.640700   22106 main.go:141] libmachine: (ha-863936-m02)     <console type='pty'>
	I0816 12:37:21.640709   22106 main.go:141] libmachine: (ha-863936-m02)       <target type='serial' port='0'/>
	I0816 12:37:21.640720   22106 main.go:141] libmachine: (ha-863936-m02)     </console>
	I0816 12:37:21.640738   22106 main.go:141] libmachine: (ha-863936-m02)     <rng model='virtio'>
	I0816 12:37:21.640758   22106 main.go:141] libmachine: (ha-863936-m02)       <backend model='random'>/dev/random</backend>
	I0816 12:37:21.640770   22106 main.go:141] libmachine: (ha-863936-m02)     </rng>
	I0816 12:37:21.640788   22106 main.go:141] libmachine: (ha-863936-m02)     
	I0816 12:37:21.640796   22106 main.go:141] libmachine: (ha-863936-m02)     
	I0816 12:37:21.640806   22106 main.go:141] libmachine: (ha-863936-m02)   </devices>
	I0816 12:37:21.640817   22106 main.go:141] libmachine: (ha-863936-m02) </domain>
	I0816 12:37:21.640831   22106 main.go:141] libmachine: (ha-863936-m02) 
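The kvm2 driver emits the libvirt domain definition above as text: the boot2docker ISO attached as a read-only cdrom, the raw disk image on a virtio bus, and two virtio NICs (the private mk-ha-863936 network plus libvirt's default network). A minimal sketch of rendering such a definition with Go's text/template follows; the struct and field names are illustrative, not minikube's actual types, and the skeleton is trimmed down from what the log shows.

    package main

    import (
    	"os"
    	"text/template"
    )

    // domainTmpl is a trimmed-down libvirt domain skeleton; the real driver
    // also emits the second NIC, serial console, RNG device and boot order.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.PrivateNet}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domainConfig struct {
    	Name       string
    	MemoryMiB  int
    	CPUs       int
    	ISOPath    string
    	DiskPath   string
    	PrivateNet string
    }

    func main() {
    	cfg := domainConfig{
    		Name:       "ha-863936-m02",
    		MemoryMiB:  2200,
    		CPUs:       2,
    		ISOPath:    "/path/to/boot2docker.iso",     // illustrative path
    		DiskPath:   "/path/to/ha-863936-m02.rawdisk", // illustrative path
    		PrivateNet: "mk-ha-863936",
    	}
    	// Render the XML to stdout; the driver hands the rendered document
    	// to libvirt (define domain, then create it) as logged above.
    	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, cfg); err != nil {
    		os.Exit(1)
    	}
    }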
	I0816 12:37:21.647415   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c8:3b:98 in network default
	I0816 12:37:21.647962   22106 main.go:141] libmachine: (ha-863936-m02) Ensuring networks are active...
	I0816 12:37:21.647986   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:21.648730   22106 main.go:141] libmachine: (ha-863936-m02) Ensuring network default is active
	I0816 12:37:21.649070   22106 main.go:141] libmachine: (ha-863936-m02) Ensuring network mk-ha-863936 is active
	I0816 12:37:21.649455   22106 main.go:141] libmachine: (ha-863936-m02) Getting domain xml...
	I0816 12:37:21.650276   22106 main.go:141] libmachine: (ha-863936-m02) Creating domain...
	I0816 12:37:22.855208   22106 main.go:141] libmachine: (ha-863936-m02) Waiting to get IP...
	I0816 12:37:22.856103   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:22.856557   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:22.856597   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:22.856536   22488 retry.go:31] will retry after 272.389415ms: waiting for machine to come up
	I0816 12:37:23.130961   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:23.131461   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:23.131484   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:23.131417   22488 retry.go:31] will retry after 263.73211ms: waiting for machine to come up
	I0816 12:37:23.396863   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:23.397312   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:23.397337   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:23.397273   22488 retry.go:31] will retry after 313.449142ms: waiting for machine to come up
	I0816 12:37:23.712539   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:23.712963   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:23.712989   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:23.712936   22488 retry.go:31] will retry after 505.914988ms: waiting for machine to come up
	I0816 12:37:24.220249   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:24.220674   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:24.220702   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:24.220630   22488 retry.go:31] will retry after 707.95495ms: waiting for machine to come up
	I0816 12:37:24.930477   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:24.930826   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:24.930856   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:24.930782   22488 retry.go:31] will retry after 639.579813ms: waiting for machine to come up
	I0816 12:37:25.571536   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:25.572001   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:25.572031   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:25.571949   22488 retry.go:31] will retry after 1.052898678s: waiting for machine to come up
	I0816 12:37:26.625833   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:26.626274   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:26.626326   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:26.626222   22488 retry.go:31] will retry after 1.484593769s: waiting for machine to come up
	I0816 12:37:28.112785   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:28.113240   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:28.113261   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:28.113173   22488 retry.go:31] will retry after 1.265009506s: waiting for machine to come up
	I0816 12:37:29.379613   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:29.379966   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:29.379989   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:29.379927   22488 retry.go:31] will retry after 2.04114548s: waiting for machine to come up
	I0816 12:37:31.422945   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:31.423402   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:31.423436   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:31.423364   22488 retry.go:31] will retry after 2.857495578s: waiting for machine to come up
	I0816 12:37:34.284282   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:34.284671   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:34.284694   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:34.284642   22488 retry.go:31] will retry after 3.238481842s: waiting for machine to come up
	I0816 12:37:37.525727   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:37.526164   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:37.526184   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:37.526113   22488 retry.go:31] will retry after 4.3057399s: waiting for machine to come up
	I0816 12:37:41.833819   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.834270   22106 main.go:141] libmachine: (ha-863936-m02) Found IP for machine: 192.168.39.101
	I0816 12:37:41.834289   22106 main.go:141] libmachine: (ha-863936-m02) Reserving static IP address...
	I0816 12:37:41.834299   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has current primary IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.834724   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find host DHCP lease matching {name: "ha-863936-m02", mac: "52:54:00:c0:1e:73", ip: "192.168.39.101"} in network mk-ha-863936
	I0816 12:37:41.905117   22106 main.go:141] libmachine: (ha-863936-m02) Reserved static IP address: 192.168.39.101
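The "will retry after 272ms / 505ms / 1.05s / 4.3s" lines show libmachine polling the network's DHCP leases with a growing, jittered delay until the new domain picks up an address. A hedged sketch of that retry pattern is below; lookupIP is a stand-in for the libvirt lease query, not minikube's actual retry.go helper.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a placeholder for querying libvirt's DHCP leases for the
    // domain's MAC address; here it simply fails a few times to show the flow.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.168.39.101", nil
    }

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for attempt := 0; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			return ip, nil
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("timed out waiting for IP: %w", err)
    		}
    		// Grow the base delay with each attempt and add jitter so that
    		// several concurrent machine creations do not poll in lock step.
    		delay := time.Duration(attempt+1) * 250 * time.Millisecond
    		delay += time.Duration(rand.Int63n(int64(200 * time.Millisecond)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    	}
    }

    func main() {
    	ip, err := waitForIP(2 * time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("Found IP for machine:", ip)
    }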
	I0816 12:37:41.905143   22106 main.go:141] libmachine: (ha-863936-m02) Waiting for SSH to be available...
	I0816 12:37:41.905189   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Getting to WaitForSSH function...
	I0816 12:37:41.907974   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.908426   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:41.908450   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.908608   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Using SSH client type: external
	I0816 12:37:41.908632   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa (-rw-------)
	I0816 12:37:41.908663   22106 main.go:141] libmachine: (ha-863936-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:37:41.908671   22106 main.go:141] libmachine: (ha-863936-m02) DBG | About to run SSH command:
	I0816 12:37:41.908684   22106 main.go:141] libmachine: (ha-863936-m02) DBG | exit 0
	I0816 12:37:42.036782   22106 main.go:141] libmachine: (ha-863936-m02) DBG | SSH cmd err, output: <nil>: 
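The WaitForSSH step above probes the guest by running "exit 0" through an external ssh client with host-key checking disabled; a zero exit status is the signal that sshd is up and the generated key works. A minimal sketch of the same probe with os/exec (the ssh options are taken from the log line; the key path and loop bounds are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // probeSSH runs "exit 0" on the target; a nil error (exit status 0)
    // means sshd accepted the connection with the machine's private key.
    func probeSSH(ip, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@"+ip,
    		"exit 0")
    	return cmd.Run()
    }

    func main() {
    	key := "/home/jenkins/.minikube/machines/ha-863936-m02/id_rsa" // illustrative path
    	for i := 0; i < 30; i++ {
    		if err := probeSSH("192.168.39.101", key); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }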
	I0816 12:37:42.037083   22106 main.go:141] libmachine: (ha-863936-m02) KVM machine creation complete!
	I0816 12:37:42.037407   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetConfigRaw
	I0816 12:37:42.037913   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:42.038073   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:42.038308   22106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:37:42.038324   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:37:42.039541   22106 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:37:42.039571   22106 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:37:42.039577   22106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:37:42.039584   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.041745   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.042058   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.042097   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.042251   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.042374   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.042479   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.042579   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.042752   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.042946   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.042957   22106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:37:42.148036   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:37:42.148058   22106 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:37:42.148067   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.150631   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.150997   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.151019   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.151219   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.151414   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.151595   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.151733   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.151890   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.152091   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.152105   22106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:37:42.257667   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:37:42.257739   22106 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:37:42.257750   22106 main.go:141] libmachine: Provisioning with buildroot...
	I0816 12:37:42.257758   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:42.257982   22106 buildroot.go:166] provisioning hostname "ha-863936-m02"
	I0816 12:37:42.258013   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:42.258225   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.260648   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.261018   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.261047   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.261197   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.261376   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.261498   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.261602   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.261775   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.261937   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.261949   22106 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936-m02 && echo "ha-863936-m02" | sudo tee /etc/hostname
	I0816 12:37:42.380594   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936-m02
	
	I0816 12:37:42.380615   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.383327   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.383693   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.383719   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.383936   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.384178   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.384347   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.384499   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.384657   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.384846   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.384863   22106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:37:42.501738   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
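The shell snippet above makes the /etc/hosts update idempotent: only if no line already ends with the new hostname does it either rewrite an existing 127.0.1.1 entry or append one. A small sketch of the same decision applied to a hosts string in Go (sample content is illustrative; the real code runs the shell version over SSH):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // setHostname mirrors the logged script: if hosts has no entry for name,
    // rewrite an existing 127.0.1.1 line or append a new one.
    func setHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // already present, nothing to do
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	const hosts = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
    	fmt.Print(setHostname(hosts, "ha-863936-m02"))
    }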
	I0816 12:37:42.501765   22106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:37:42.501779   22106 buildroot.go:174] setting up certificates
	I0816 12:37:42.501788   22106 provision.go:84] configureAuth start
	I0816 12:37:42.501796   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:42.502045   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:42.504618   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.504898   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.504943   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.505135   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.507187   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.507542   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.507570   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.507718   22106 provision.go:143] copyHostCerts
	I0816 12:37:42.507747   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:37:42.507785   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:37:42.507797   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:37:42.507873   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:37:42.507975   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:37:42.508000   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:37:42.508009   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:37:42.508041   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:37:42.508111   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:37:42.508137   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:37:42.508146   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:37:42.508193   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:37:42.508286   22106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936-m02 san=[127.0.0.1 192.168.39.101 ha-863936-m02 localhost minikube]
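The configureAuth step generates a server certificate whose SANs cover the loopback address, the machine's IP, its hostname, localhost and minikube, signed by the profile's CA. The sketch below shows a certificate with those SANs using crypto/x509; for brevity it self-signs instead of signing with a separate CA key pair, so it is an illustration of the SAN layout rather than minikube's actual provisioner code.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-863936-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-863936-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
    	}
    	// Self-signed for the sketch; minikube signs with ca.pem / ca-key.pem.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }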
	I0816 12:37:42.645945   22106 provision.go:177] copyRemoteCerts
	I0816 12:37:42.645994   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:37:42.646015   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.648696   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.649035   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.649061   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.649216   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.649345   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.649484   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.649568   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:42.731781   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:37:42.731841   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:37:42.755699   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:37:42.755759   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:37:42.778658   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:37:42.778716   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 12:37:42.801588   22106 provision.go:87] duration metric: took 299.788614ms to configureAuth
	I0816 12:37:42.801637   22106 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:37:42.801814   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:42.801879   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.804443   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.804758   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.804786   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.804988   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.805161   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.805302   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.805417   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.805549   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.805716   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.805730   22106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:37:43.072228   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:37:43.072252   22106 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:37:43.072261   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetURL
	I0816 12:37:43.073511   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Using libvirt version 6000000
	I0816 12:37:43.075706   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.076023   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.076049   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.076189   22106 main.go:141] libmachine: Docker is up and running!
	I0816 12:37:43.076204   22106 main.go:141] libmachine: Reticulating splines...
	I0816 12:37:43.076211   22106 client.go:171] duration metric: took 22.087958589s to LocalClient.Create
	I0816 12:37:43.076229   22106 start.go:167] duration metric: took 22.088010164s to libmachine.API.Create "ha-863936"
	I0816 12:37:43.076237   22106 start.go:293] postStartSetup for "ha-863936-m02" (driver="kvm2")
	I0816 12:37:43.076246   22106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:37:43.076269   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.076484   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:37:43.076507   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:43.078280   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.078557   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.078583   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.078707   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.078871   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.079017   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.079154   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:43.164009   22106 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:37:43.168315   22106 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:37:43.168331   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:37:43.168408   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:37:43.168499   22106 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:37:43.168509   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:37:43.168615   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:37:43.177933   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:37:43.201327   22106 start.go:296] duration metric: took 125.079274ms for postStartSetup
	I0816 12:37:43.201370   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetConfigRaw
	I0816 12:37:43.201918   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:43.204181   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.204514   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.204536   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.204779   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:37:43.204971   22106 start.go:128] duration metric: took 22.234710675s to createHost
	I0816 12:37:43.204991   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:43.206856   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.207256   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.207281   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.207411   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.207587   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.207749   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.207875   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.208032   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:43.208178   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:43.208192   22106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:37:43.317639   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811863.288503434
	
	I0816 12:37:43.317655   22106 fix.go:216] guest clock: 1723811863.288503434
	I0816 12:37:43.317662   22106 fix.go:229] Guest: 2024-08-16 12:37:43.288503434 +0000 UTC Remote: 2024-08-16 12:37:43.204981486 +0000 UTC m=+70.209334380 (delta=83.521948ms)
	I0816 12:37:43.317676   22106 fix.go:200] guest clock delta is within tolerance: 83.521948ms
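The guest-clock check above runs "date +%s.%N" inside the VM, compares it with the host clock, and only proceeds if the drift stays within tolerance (83.5ms here). A hedged sketch of that comparison; the tolerance value and helper name are illustrative, and the guest output would normally arrive over SSH rather than being computed locally.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // checkClockDelta parses the guest's "date +%s.%N" output, compares it
    // with the host clock and flags drift beyond the tolerance.
    func checkClockDelta(guestOutput string, tolerance time.Duration) error {
    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		return err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > tolerance {
    		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
    	}
    	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	return nil
    }

    func main() {
    	// Simulate the guest's "date +%s.%N" output; in the log above the
    	// real value was 1723811863.288503434, returned over SSH.
    	guestOutput := fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9)
    	if err := checkClockDelta(guestOutput, time.Second); err != nil {
    		fmt.Println(err)
    	}
    }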
	I0816 12:37:43.317680   22106 start.go:83] releasing machines lock for "ha-863936-m02", held for 22.347510342s
	I0816 12:37:43.317698   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.317961   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:43.320459   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.320822   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.320851   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.322946   22106 out.go:177] * Found network options:
	I0816 12:37:43.324216   22106 out.go:177]   - NO_PROXY=192.168.39.2
	W0816 12:37:43.325417   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:37:43.325449   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.325979   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.326160   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.326272   22106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:37:43.326310   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	W0816 12:37:43.326341   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:37:43.326413   22106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:37:43.326434   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:43.328950   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329206   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329315   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.329341   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329468   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.329545   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.329574   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329635   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.329705   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.329776   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.329827   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.329884   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:43.329946   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.330046   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:43.561485   22106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:37:43.567214   22106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:37:43.567276   22106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:37:43.583545   22106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:37:43.583562   22106 start.go:495] detecting cgroup driver to use...
	I0816 12:37:43.583612   22106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:37:43.599789   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:37:43.613198   22106 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:37:43.613254   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:37:43.626286   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:37:43.640299   22106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:37:43.762732   22106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:37:43.923013   22106 docker.go:233] disabling docker service ...
	I0816 12:37:43.923085   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:37:43.937191   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:37:43.949587   22106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:37:44.069985   22106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:37:44.185311   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:37:44.199367   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:37:44.217870   22106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:37:44.217927   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.228954   22106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:37:44.229018   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.240072   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.251064   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.261798   22106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:37:44.272677   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.283104   22106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.300285   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
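
Taken together, the sed edits above configure CRI-O for the defaults this test expects: the registry.k8s.io pause image, the cgroupfs cgroup manager, conmon in the pod cgroup, and unprivileged low ports. An illustrative reconstruction of the resulting drop-in (the surrounding defaults depend on what the ISO ships):

sudo cat /etc/crio/crio.conf.d/02-crio.conf
# [crio.image]
# pause_image = "registry.k8s.io/pause:3.10"
#
# [crio.runtime]
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"
# default_sysctls = [
#   "net.ipv4.ip_unprivileged_port_start=0",
# ]
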
	I0816 12:37:44.311213   22106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:37:44.320966   22106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:37:44.321017   22106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:37:44.334167   22106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:37:44.344659   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:37:44.465534   22106 ssh_runner.go:195] Run: sudo systemctl restart crio
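
The netfilter warning above is benign: on a fresh VM the bridge sysctls only exist once br_netfilter is loaded, which is exactly the fallback the log takes before enabling IP forwarding and restarting CRI-O. The same steps as standalone commands (a sketch, not the exact runner invocation):

sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo systemctl daemon-reload && sudo systemctl restart crio
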
	I0816 12:37:44.597973   22106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:37:44.598063   22106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:37:44.603066   22106 start.go:563] Will wait 60s for crictl version
	I0816 12:37:44.603115   22106 ssh_runner.go:195] Run: which crictl
	I0816 12:37:44.606849   22106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:37:44.652499   22106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:37:44.652588   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:44.681284   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:44.710540   22106 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:37:44.711910   22106 out.go:177]   - env NO_PROXY=192.168.39.2
	I0816 12:37:44.712951   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:44.715737   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:44.716090   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:44.716114   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:44.716331   22106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:37:44.720468   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
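
The /etc/hosts edit above is deliberately idempotent: any stale host.minikube.internal line is stripped with grep -v, the fresh mapping is appended, and the temp file is copied back with sudo because the unprivileged shell cannot redirect into /etc/hosts directly. The same pattern as a generic helper (hypothetical function, not part of minikube):

update_hosts() {   # usage: update_hosts 192.168.39.1 host.minikube.internal
  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
  sudo cp "/tmp/hosts.$$" /etc/hosts
}
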
	I0816 12:37:44.733172   22106 mustload.go:65] Loading cluster: ha-863936
	I0816 12:37:44.733378   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:44.733640   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:44.733679   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:44.747821   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0816 12:37:44.748195   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:44.748665   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:44.748683   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:44.748962   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:44.749131   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:44.750510   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:37:44.750816   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:44.750850   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:44.764419   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 12:37:44.764744   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:44.765220   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:44.765240   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:44.765521   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:44.765698   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:37:44.765825   22106 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.101
	I0816 12:37:44.765834   22106 certs.go:194] generating shared ca certs ...
	I0816 12:37:44.765852   22106 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:44.765973   22106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:37:44.766028   22106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:37:44.766041   22106 certs.go:256] generating profile certs ...
	I0816 12:37:44.766123   22106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:37:44.766153   22106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1
	I0816 12:37:44.766174   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.254]
	I0816 12:37:44.830541   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1 ...
	I0816 12:37:44.830570   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1: {Name:mkfed86040fee228ea9f3c3ee1e30bba4a154412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:44.830749   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1 ...
	I0816 12:37:44.830771   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1: {Name:mk2c664260d68b6ab0552ce83b5ab0e9b76f731f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:44.830883   22106 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:37:44.831032   22106 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:37:44.831182   22106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:37:44.831199   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:37:44.831224   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:37:44.831243   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:37:44.831262   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:37:44.831280   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:37:44.831298   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:37:44.831316   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:37:44.831335   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
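
The per-profile apiserver certificate generated above is what lets this control plane serve both on its own IP and on the shared VIP; its SAN list mirrors the IP set shown in the log. A quick way to confirm that on disk (illustrative, using the path from the log):

openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt \
  | grep -A1 'Subject Alternative Name'
# expected IPs: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.2, 192.168.39.101, 192.168.39.254
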
	I0816 12:37:44.831395   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:37:44.831434   22106 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:37:44.831447   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:37:44.831591   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:37:44.831704   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:37:44.831740   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:37:44.831807   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:37:44.831849   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:44.831871   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:37:44.831890   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:37:44.831929   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:37:44.834714   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:44.835032   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:44.835051   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:44.835232   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:37:44.835417   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:37:44.835565   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:37:44.835674   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:37:44.905239   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 12:37:44.910247   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 12:37:44.920953   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 12:37:44.925193   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0816 12:37:44.935453   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 12:37:44.939756   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 12:37:44.949753   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 12:37:44.953788   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 12:37:44.963955   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 12:37:44.968038   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 12:37:44.977647   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 12:37:44.981682   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0816 12:37:44.991402   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:37:45.016672   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:37:45.040484   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:37:45.063887   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:37:45.086552   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 12:37:45.111080   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 12:37:45.134678   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:37:45.158653   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:37:45.181591   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:37:45.204802   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:37:45.228118   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:37:45.251573   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 12:37:45.269286   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0816 12:37:45.285849   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 12:37:45.301292   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 12:37:45.317478   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 12:37:45.334653   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0816 12:37:45.351552   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
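
After these copies the new node carries the complete shared control-plane PKI: cluster and proxy-client CAs, the service-account key pair, the front-proxy CA, the etcd CA, the profile certificates and a kubeconfig. A sanity check on the target VM could look like this (a sketch, paths as written by the log):

sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd
sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/ca.crt
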
	I0816 12:37:45.367606   22106 ssh_runner.go:195] Run: openssl version
	I0816 12:37:45.373185   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:37:45.383850   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:37:45.388246   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:37:45.388283   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:37:45.394072   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:37:45.405658   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:37:45.416660   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:45.421231   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:45.421280   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:45.427251   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:37:45.438506   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:37:45.449273   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:37:45.453684   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:37:45.453733   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:37:45.459563   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
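
The test/ln/openssl sequence above is the usual way of registering a CA in the system trust directory: the PEM is linked into /etc/ssl/certs under its own name and again under its OpenSSL subject hash, which is how TLS libraries that scan that directory look certificates up. For one of the CAs, spelled out (illustrative):

pem=/usr/share/ca-certificates/minikubeCA.pem
sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
h=$(openssl x509 -hash -noout -in "$pem")          # b5213941 for this CA, per the log
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
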
	I0816 12:37:45.470406   22106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:37:45.474505   22106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:37:45.474550   22106 kubeadm.go:934] updating node {m02 192.168.39.101 8443 v1.31.0 crio true true} ...
	I0816 12:37:45.474626   22106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
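
The kubelet arguments above end up in the systemd drop-in and unit file that are copied to the node a little later in this log (10-kubeadm.conf and kubelet.service). Once those are in place, the merged unit can be inspected directly; a sketch of where to look, assuming the paths used by the later scp steps:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat /lib/systemd/system/kubelet.service
systemctl cat kubelet      # merged view, including the ExecStart shown above
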
	I0816 12:37:45.474649   22106 kube-vip.go:115] generating kube-vip config ...
	I0816 12:37:45.474676   22106 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:37:45.492902   22106 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:37:45.492972   22106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
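
This manifest is later written to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs it as a static pod on every control-plane node; the instances elect a leader through the plndr-cp-lock Lease and the leader answers for the VIP 192.168.39.254 on eth0. Two quick checks of that behaviour (a sketch, assuming the profile's kubeconfig context is named ha-863936):

ip -brief addr show eth0 | grep -q 192.168.39.254 && echo "this node currently holds the VIP"
kubectl --context ha-863936 -n kube-system get lease plndr-cp-lock \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
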
	I0816 12:37:45.493019   22106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:37:45.502996   22106 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 12:37:45.503055   22106 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 12:37:45.512956   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 12:37:45.512978   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:37:45.513033   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:37:45.513091   22106 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 12:37:45.513124   22106 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 12:37:45.517363   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 12:37:45.517389   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 12:38:24.802198   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:38:24.802276   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:38:24.807378   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 12:38:24.807427   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 12:38:36.355721   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:38:36.370820   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:38:36.370943   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:38:36.375474   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 12:38:36.375500   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
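
The three binaries above are fetched from dl.k8s.io against their published .sha256 checksums and then pushed into /var/lib/minikube/binaries/v1.31.0 over SSH, which accounts for the roughly 50-second jump in the timestamps across this block. A standalone equivalent of that transfer (a sketch; amd64 and v1.31.0 as in this run):

V=v1.31.0
sudo mkdir -p "/var/lib/minikube/binaries/${V}"
for b in kubectl kubeadm kubelet; do
  curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/${b}"
  echo "$(curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/amd64/${b}.sha256")  ${b}" | sha256sum --check
  sudo install -m 0755 "${b}" "/var/lib/minikube/binaries/${V}/${b}"
done
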
	I0816 12:38:36.684466   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 12:38:36.694187   22106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 12:38:36.710456   22106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:38:36.726742   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 12:38:36.742268   22106 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:38:36.745775   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:38:36.757469   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:38:36.877689   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:38:36.893843   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:38:36.894275   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:38:36.894326   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:38:36.909385   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0816 12:38:36.909852   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:38:36.910330   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:38:36.910349   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:38:36.910641   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:38:36.910826   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:38:36.910982   22106 start.go:317] joinCluster: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:38:36.911093   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 12:38:36.911114   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:38:36.914091   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:38:36.914463   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:38:36.914491   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:38:36.914737   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:38:36.914950   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:38:36.915092   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:38:36.915239   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:38:37.057232   22106 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:38:37.057277   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ly6k5a.xfrdulb4vc1nup4v --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I0816 12:38:57.130571   22106 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ly6k5a.xfrdulb4vc1nup4v --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (20.073270808s)
	I0816 12:38:57.130611   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 12:38:57.758540   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863936-m02 minikube.k8s.io/updated_at=2024_08_16T12_38_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=ha-863936 minikube.k8s.io/primary=false
	I0816 12:38:57.887510   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863936-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 12:38:58.012739   22106 start.go:319] duration metric: took 21.101753547s to joinCluster
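
The join above is the standard two-step kubeadm control-plane join: the existing control plane mints a bootstrap token and prints the join command, and the new node runs it with the extra control-plane flags; minikube then labels the node and removes the control-plane NoSchedule taint so regular workloads can schedule there. Reduced to its essentials (a sketch; <token> and <hash> stand for the one-time values printed by the first step):

# on the existing control plane:
sudo kubeadm token create --print-join-command --ttl=0
# on ha-863936-m02, with the printed values:
sudo kubeadm join control-plane.minikube.internal:8443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443 \
  --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m02
# afterwards, as run above:
kubectl taint nodes ha-863936-m02 node-role.kubernetes.io/control-plane:NoSchedule-
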
	I0816 12:38:58.012807   22106 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:38:58.013086   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:38:58.014382   22106 out.go:177] * Verifying Kubernetes components...
	I0816 12:38:58.015777   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:38:58.266323   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:38:58.326752   22106 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:38:58.326977   22106 kapi.go:59] client config for ha-863936: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key", CAFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 12:38:58.327034   22106 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0816 12:38:58.327221   22106 node_ready.go:35] waiting up to 6m0s for node "ha-863936-m02" to be "Ready" ...
	I0816 12:38:58.327320   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:58.327332   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:58.327340   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:58.327344   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:58.350331   22106 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0816 12:38:58.828408   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:58.828437   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:58.828450   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:58.828455   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:58.837724   22106 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0816 12:38:59.327834   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:59.327861   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:59.327871   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:59.327876   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:59.331034   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:38:59.828050   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:59.828073   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:59.828085   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:59.828090   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:59.832321   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:00.328319   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:00.328340   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:00.328348   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:00.328353   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:00.331910   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:00.332603   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:00.828079   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:00.828107   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:00.828118   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:00.828126   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:00.834382   22106 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 12:39:01.327891   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:01.327914   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:01.327923   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:01.327928   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:01.331659   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:01.828076   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:01.828100   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:01.828112   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:01.828118   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:01.832572   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:02.327817   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:02.327841   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:02.327849   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:02.327853   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:02.332131   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:02.332714   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:02.827732   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:02.827753   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:02.827761   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:02.827765   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:02.831158   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:03.328253   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:03.328274   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:03.328282   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:03.328289   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:03.335833   22106 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 12:39:03.828004   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:03.828023   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:03.828032   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:03.828036   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:03.830735   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:04.327644   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:04.327669   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:04.327678   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:04.327682   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:04.331296   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:04.827865   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:04.827885   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:04.827893   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:04.827896   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:04.831546   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:04.832154   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:05.327517   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:05.327545   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:05.327556   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:05.327562   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:05.330381   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:05.828427   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:05.828451   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:05.828460   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:05.828465   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:05.832531   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:06.327878   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:06.327899   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:06.327907   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:06.327910   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:06.331640   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:06.828067   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:06.828087   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:06.828095   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:06.828101   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:06.831261   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:07.327758   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:07.327779   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:07.327787   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:07.327792   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:07.331448   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:07.332038   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:07.827436   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:07.827460   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:07.827470   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:07.827475   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:07.830494   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:08.327510   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:08.327534   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:08.327542   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:08.327548   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:08.332651   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:39:08.827538   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:08.827562   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:08.827571   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:08.827576   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:08.830857   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:09.327698   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:09.327719   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:09.327727   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:09.327730   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:09.331441   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:09.828240   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:09.828263   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:09.828269   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:09.828274   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:09.831561   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:09.832339   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:10.327914   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:10.327936   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:10.327944   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:10.327948   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:10.330956   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:10.828073   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:10.828093   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:10.828101   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:10.828105   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:10.831785   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:11.328326   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:11.328351   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:11.328360   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:11.328365   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:11.331884   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:11.827591   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:11.827613   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:11.827621   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:11.827624   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:11.830668   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:12.328250   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:12.328276   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:12.328288   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:12.328294   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:12.331805   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:12.332422   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:12.827614   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:12.827641   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:12.827651   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:12.827657   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:12.831532   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:13.328315   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:13.328339   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:13.328347   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:13.328353   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:13.331899   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:13.827992   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:13.828020   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:13.828032   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:13.828039   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:13.832214   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:14.328331   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:14.328357   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:14.328366   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:14.328371   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:14.331879   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:14.827609   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:14.827637   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:14.827649   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:14.827655   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:14.831305   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:14.831893   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:15.328285   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:15.328312   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:15.328323   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:15.328328   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:15.331836   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:15.828063   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:15.828088   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:15.828101   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:15.828108   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:15.840834   22106 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0816 12:39:16.328382   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:16.328404   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:16.328412   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:16.328417   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:16.332032   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:16.827633   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:16.827654   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:16.827662   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:16.827666   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:16.831155   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.327417   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:17.327449   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.327457   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.327461   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.331265   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.331872   22106 node_ready.go:49] node "ha-863936-m02" has status "Ready":"True"
	I0816 12:39:17.331889   22106 node_ready.go:38] duration metric: took 19.004645121s for node "ha-863936-m02" to be "Ready" ...
	I0816 12:39:17.331898   22106 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
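
The GET loop that just finished and the per-pod waits that start here are minikube's own polling (roughly one request every 500ms) for the node's Ready condition and then for each system-critical pod. A rough kubectl equivalent, assuming the profile's kubeconfig context is named ha-863936:

kubectl --context ha-863936 wait --for=condition=Ready node/ha-863936-m02 --timeout=6m
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context ha-863936 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
done
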
	I0816 12:39:17.331957   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:17.331966   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.331973   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.331981   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.336440   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:17.345623   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.345712   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7gfgm
	I0816 12:39:17.345722   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.345730   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.345734   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.350186   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:17.351194   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.351207   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.351213   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.351216   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.353859   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.354806   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.354822   22106 pod_ready.go:82] duration metric: took 9.175178ms for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.354834   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.354885   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ssb5h
	I0816 12:39:17.354895   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.354904   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.354912   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.357694   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.358348   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.358359   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.358365   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.358368   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.360647   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.361047   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.361061   22106 pod_ready.go:82] duration metric: took 6.22116ms for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.361070   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.361122   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936
	I0816 12:39:17.361132   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.361141   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.361146   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.363551   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.364299   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.364312   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.364321   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.364328   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.366668   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.367071   22106 pod_ready.go:93] pod "etcd-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.367087   22106 pod_ready.go:82] duration metric: took 6.010864ms for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.367099   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.367159   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m02
	I0816 12:39:17.367169   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.367188   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.367196   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.370108   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.370764   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:17.370779   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.370786   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.370789   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.373172   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.373650   22106 pod_ready.go:93] pod "etcd-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.373667   22106 pod_ready.go:82] duration metric: took 6.560533ms for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.373685   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.528070   22106 request.go:632] Waited for 154.326739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:39:17.528141   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:39:17.528148   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.528155   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.528158   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.531759   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.727755   22106 request.go:632] Waited for 195.323563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.727817   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.727822   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.727830   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.727838   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.730878   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.731434   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.731452   22106 pod_ready.go:82] duration metric: took 357.759007ms for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
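The "Waited for ... due to client-side throttling" lines come from client-go's built-in token-bucket rate limiter, not from API-server priority and fairness (the message says as much). A short sketch of where that limiter is configured; the kubeconfig path is assumed and the QPS/Burst values are illustrative (client-go's defaults are 5 and 10):

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// client-go applies a token-bucket rate limiter per client; when the bucket
    	// is exhausted, requests block and emit the "Waited for ... due to
    	// client-side throttling" lines seen above. Raising QPS/Burst shortens
    	// those waits.
    	cfg.QPS = 50    // steady-state requests per second (default 5)
    	cfg.Burst = 100 // short-term burst budget (default 10)

    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }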
	I0816 12:39:17.731465   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.927625   22106 request.go:632] Waited for 196.086028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:39:17.927679   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:39:17.927686   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.927695   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.927701   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.930446   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:18.128100   22106 request.go:632] Waited for 197.13209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.128173   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.128180   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.128188   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.128196   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.131748   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.132266   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:18.132289   22106 pod_ready.go:82] duration metric: took 400.816169ms for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.132301   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.327778   22106 request.go:632] Waited for 195.404436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:39:18.327839   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:39:18.327845   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.327852   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.327856   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.330979   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.527899   22106 request.go:632] Waited for 196.351485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:18.527973   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:18.527983   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.527991   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.527998   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.531595   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.532029   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:18.532046   22106 pod_ready.go:82] duration metric: took 399.737901ms for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.532057   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.728195   22106 request.go:632] Waited for 196.05883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:39:18.728249   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:39:18.728254   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.728261   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.728265   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.731338   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.927599   22106 request.go:632] Waited for 195.289378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.927668   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.927674   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.927681   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.927686   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.930788   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.931536   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:18.931553   22106 pod_ready.go:82] duration metric: took 399.485231ms for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.931562   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.127787   22106 request.go:632] Waited for 196.163483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:39:19.127874   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:39:19.127883   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.127892   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.127900   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.131246   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:19.328207   22106 request.go:632] Waited for 196.36073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:19.328281   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:19.328287   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.328296   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.328300   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.331668   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:19.332189   22106 pod_ready.go:93] pod "kube-proxy-7lvfc" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:19.332209   22106 pod_ready.go:82] duration metric: took 400.637905ms for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.332217   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.528262   22106 request.go:632] Waited for 195.977836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:39:19.528317   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:39:19.528322   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.528331   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.528337   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.531651   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:19.727478   22106 request.go:632] Waited for 195.290304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:19.727555   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:19.727564   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.727572   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.727577   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.730371   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:19.731361   22106 pod_ready.go:93] pod "kube-proxy-g75mg" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:19.731383   22106 pod_ready.go:82] duration metric: took 399.159306ms for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.731394   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.928120   22106 request.go:632] Waited for 196.616943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:39:19.928191   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:39:19.928198   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.928209   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.928215   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.932268   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:20.127965   22106 request.go:632] Waited for 195.079329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:20.128044   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:20.128067   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.128078   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.128086   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.131267   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:20.131865   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:20.131880   22106 pod_ready.go:82] duration metric: took 400.477557ms for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:20.131890   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:20.328055   22106 request.go:632] Waited for 196.107585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:39:20.328107   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:39:20.328111   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.328119   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.328125   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.331220   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:20.528159   22106 request.go:632] Waited for 196.390149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:20.528231   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:20.528237   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.528248   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.528258   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.531257   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:20.531961   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:20.531979   22106 pod_ready.go:82] duration metric: took 400.081662ms for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:20.531990   22106 pod_ready.go:39] duration metric: took 3.200081497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:39:20.532005   22106 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:39:20.532062   22106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:39:20.548728   22106 api_server.go:72] duration metric: took 22.535890335s to wait for apiserver process to appear ...
	I0816 12:39:20.548756   22106 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:39:20.548774   22106 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0816 12:39:20.553303   22106 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0816 12:39:20.553367   22106 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0816 12:39:20.553375   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.553383   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.553386   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.554238   22106 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 12:39:20.554352   22106 api_server.go:141] control plane version: v1.31.0
	I0816 12:39:20.554369   22106 api_server.go:131] duration metric: took 5.606374ms to wait for apiserver health ...
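The healthz probe above is a plain HTTPS GET that expects a 200 response with the literal body "ok", followed by a GET of /version to read the control-plane version. A minimal sketch of the same probe; certificate verification is skipped here purely for brevity (minikube itself trusts the cluster CA from its profile directory):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Skip TLS verification for the sketch only.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.39.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers HTTP 200 with the body "ok".
    	fmt.Println(resp.StatusCode, string(body))
    }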
	I0816 12:39:20.554379   22106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:39:20.727788   22106 request.go:632] Waited for 173.337204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:20.727865   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:20.727871   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.727879   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.727886   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.732600   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:20.737952   22106 system_pods.go:59] 17 kube-system pods found
	I0816 12:39:20.737977   22106 system_pods.go:61] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:39:20.737983   22106 system_pods.go:61] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:39:20.737987   22106 system_pods.go:61] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:39:20.737990   22106 system_pods.go:61] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:39:20.737994   22106 system_pods.go:61] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:39:20.737997   22106 system_pods.go:61] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:39:20.738000   22106 system_pods.go:61] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:39:20.738004   22106 system_pods.go:61] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:39:20.738007   22106 system_pods.go:61] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:39:20.738012   22106 system_pods.go:61] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:39:20.738015   22106 system_pods.go:61] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:39:20.738018   22106 system_pods.go:61] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:39:20.738021   22106 system_pods.go:61] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:39:20.738024   22106 system_pods.go:61] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:39:20.738028   22106 system_pods.go:61] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:39:20.738032   22106 system_pods.go:61] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:39:20.738039   22106 system_pods.go:61] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:39:20.738047   22106 system_pods.go:74] duration metric: took 183.660899ms to wait for pod list to return data ...
	I0816 12:39:20.738059   22106 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:39:20.927437   22106 request.go:632] Waited for 189.309729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:39:20.927494   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:39:20.927499   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.927505   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.927514   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.931457   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:20.931679   22106 default_sa.go:45] found service account: "default"
	I0816 12:39:20.931696   22106 default_sa.go:55] duration metric: took 193.631337ms for default service account to be created ...
	I0816 12:39:20.931705   22106 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:39:21.127633   22106 request.go:632] Waited for 195.859654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:21.127713   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:21.127723   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:21.127732   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:21.127740   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:21.132641   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:21.136822   22106 system_pods.go:86] 17 kube-system pods found
	I0816 12:39:21.136846   22106 system_pods.go:89] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:39:21.136852   22106 system_pods.go:89] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:39:21.136856   22106 system_pods.go:89] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:39:21.136860   22106 system_pods.go:89] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:39:21.136864   22106 system_pods.go:89] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:39:21.136869   22106 system_pods.go:89] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:39:21.136873   22106 system_pods.go:89] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:39:21.136876   22106 system_pods.go:89] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:39:21.136880   22106 system_pods.go:89] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:39:21.136884   22106 system_pods.go:89] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:39:21.136889   22106 system_pods.go:89] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:39:21.136893   22106 system_pods.go:89] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:39:21.136902   22106 system_pods.go:89] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:39:21.136923   22106 system_pods.go:89] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:39:21.136933   22106 system_pods.go:89] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:39:21.136938   22106 system_pods.go:89] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:39:21.136943   22106 system_pods.go:89] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:39:21.136956   22106 system_pods.go:126] duration metric: took 205.243032ms to wait for k8s-apps to be running ...
	I0816 12:39:21.136967   22106 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:39:21.137011   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:39:21.153172   22106 system_svc.go:56] duration metric: took 16.194838ms WaitForService to wait for kubelet
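The kubelet check is a single systemctl invocation run through the SSH runner; with --quiet the exit code alone answers whether the unit is active. A local sketch of the equivalent check (run directly rather than over SSH, and without sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// "systemctl is-active --quiet kubelet" prints nothing; a zero exit code
    	// means the service is active, anything else means it is not.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }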
	I0816 12:39:21.153206   22106 kubeadm.go:582] duration metric: took 23.140371377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:39:21.153231   22106 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:39:21.327569   22106 request.go:632] Waited for 174.246241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0816 12:39:21.327624   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0816 12:39:21.327629   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:21.327637   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:21.327640   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:21.331685   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:21.332618   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:39:21.332641   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:39:21.332654   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:39:21.332660   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:39:21.332667   22106 node_conditions.go:105] duration metric: took 179.429674ms to run NodePressure ...
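The NodePressure step lists all nodes and reads two capacity fields from each node's status, which is what produces the ephemeral-storage and cpu lines above. A client-go sketch of the same read (kubeconfig path assumed):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The log above prints exactly these two capacities per node.
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    }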
	I0816 12:39:21.332684   22106 start.go:241] waiting for startup goroutines ...
	I0816 12:39:21.332713   22106 start.go:255] writing updated cluster config ...
	I0816 12:39:21.335242   22106 out.go:201] 
	I0816 12:39:21.337077   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:39:21.337195   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:39:21.338879   22106 out.go:177] * Starting "ha-863936-m03" control-plane node in "ha-863936" cluster
	I0816 12:39:21.340238   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:39:21.340264   22106 cache.go:56] Caching tarball of preloaded images
	I0816 12:39:21.340378   22106 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:39:21.340394   22106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:39:21.340485   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:39:21.340659   22106 start.go:360] acquireMachinesLock for ha-863936-m03: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:39:21.340698   22106 start.go:364] duration metric: took 21.46µs to acquireMachinesLock for "ha-863936-m03"
	I0816 12:39:21.340715   22106 start.go:93] Provisioning new machine with config: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:39:21.340805   22106 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0816 12:39:21.342415   22106 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 12:39:21.342506   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:21.342542   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:21.357466   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0816 12:39:21.357918   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:21.358353   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:21.358374   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:21.358661   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:21.358813   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:21.358960   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:21.359165   22106 start.go:159] libmachine.API.Create for "ha-863936" (driver="kvm2")
	I0816 12:39:21.359191   22106 client.go:168] LocalClient.Create starting
	I0816 12:39:21.359219   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:39:21.359256   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:39:21.359276   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:39:21.359342   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:39:21.359372   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:39:21.359389   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:39:21.359413   22106 main.go:141] libmachine: Running pre-create checks...
	I0816 12:39:21.359424   22106 main.go:141] libmachine: (ha-863936-m03) Calling .PreCreateCheck
	I0816 12:39:21.359602   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetConfigRaw
	I0816 12:39:21.359988   22106 main.go:141] libmachine: Creating machine...
	I0816 12:39:21.360000   22106 main.go:141] libmachine: (ha-863936-m03) Calling .Create
	I0816 12:39:21.360136   22106 main.go:141] libmachine: (ha-863936-m03) Creating KVM machine...
	I0816 12:39:21.361486   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found existing default KVM network
	I0816 12:39:21.361652   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found existing private KVM network mk-ha-863936
	I0816 12:39:21.361767   22106 main.go:141] libmachine: (ha-863936-m03) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03 ...
	I0816 12:39:21.361788   22106 main.go:141] libmachine: (ha-863936-m03) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:39:21.361860   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.361775   23071 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:39:21.361947   22106 main.go:141] libmachine: (ha-863936-m03) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:39:21.588422   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.588305   23071 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa...
	I0816 12:39:21.689781   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.689670   23071 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/ha-863936-m03.rawdisk...
	I0816 12:39:21.689815   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Writing magic tar header
	I0816 12:39:21.689829   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Writing SSH key tar header
	I0816 12:39:21.689840   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.689803   23071 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03 ...
	I0816 12:39:21.689929   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03
	I0816 12:39:21.689961   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:39:21.689974   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:39:21.690013   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03 (perms=drwx------)
	I0816 12:39:21.690024   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:39:21.690039   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:39:21.690050   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:39:21.690061   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home
	I0816 12:39:21.690073   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Skipping /home - not owner
	I0816 12:39:21.690085   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:39:21.690101   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:39:21.690116   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:39:21.690133   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:39:21.690146   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:39:21.690157   22106 main.go:141] libmachine: (ha-863936-m03) Creating domain...
	I0816 12:39:21.691185   22106 main.go:141] libmachine: (ha-863936-m03) define libvirt domain using xml: 
	I0816 12:39:21.691205   22106 main.go:141] libmachine: (ha-863936-m03) <domain type='kvm'>
	I0816 12:39:21.691215   22106 main.go:141] libmachine: (ha-863936-m03)   <name>ha-863936-m03</name>
	I0816 12:39:21.691223   22106 main.go:141] libmachine: (ha-863936-m03)   <memory unit='MiB'>2200</memory>
	I0816 12:39:21.691231   22106 main.go:141] libmachine: (ha-863936-m03)   <vcpu>2</vcpu>
	I0816 12:39:21.691244   22106 main.go:141] libmachine: (ha-863936-m03)   <features>
	I0816 12:39:21.691256   22106 main.go:141] libmachine: (ha-863936-m03)     <acpi/>
	I0816 12:39:21.691266   22106 main.go:141] libmachine: (ha-863936-m03)     <apic/>
	I0816 12:39:21.691276   22106 main.go:141] libmachine: (ha-863936-m03)     <pae/>
	I0816 12:39:21.691292   22106 main.go:141] libmachine: (ha-863936-m03)     
	I0816 12:39:21.691332   22106 main.go:141] libmachine: (ha-863936-m03)   </features>
	I0816 12:39:21.691356   22106 main.go:141] libmachine: (ha-863936-m03)   <cpu mode='host-passthrough'>
	I0816 12:39:21.691387   22106 main.go:141] libmachine: (ha-863936-m03)   
	I0816 12:39:21.691415   22106 main.go:141] libmachine: (ha-863936-m03)   </cpu>
	I0816 12:39:21.691445   22106 main.go:141] libmachine: (ha-863936-m03)   <os>
	I0816 12:39:21.691460   22106 main.go:141] libmachine: (ha-863936-m03)     <type>hvm</type>
	I0816 12:39:21.691472   22106 main.go:141] libmachine: (ha-863936-m03)     <boot dev='cdrom'/>
	I0816 12:39:21.691479   22106 main.go:141] libmachine: (ha-863936-m03)     <boot dev='hd'/>
	I0816 12:39:21.691489   22106 main.go:141] libmachine: (ha-863936-m03)     <bootmenu enable='no'/>
	I0816 12:39:21.691500   22106 main.go:141] libmachine: (ha-863936-m03)   </os>
	I0816 12:39:21.691509   22106 main.go:141] libmachine: (ha-863936-m03)   <devices>
	I0816 12:39:21.691518   22106 main.go:141] libmachine: (ha-863936-m03)     <disk type='file' device='cdrom'>
	I0816 12:39:21.691531   22106 main.go:141] libmachine: (ha-863936-m03)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/boot2docker.iso'/>
	I0816 12:39:21.691542   22106 main.go:141] libmachine: (ha-863936-m03)       <target dev='hdc' bus='scsi'/>
	I0816 12:39:21.691550   22106 main.go:141] libmachine: (ha-863936-m03)       <readonly/>
	I0816 12:39:21.691561   22106 main.go:141] libmachine: (ha-863936-m03)     </disk>
	I0816 12:39:21.691571   22106 main.go:141] libmachine: (ha-863936-m03)     <disk type='file' device='disk'>
	I0816 12:39:21.691584   22106 main.go:141] libmachine: (ha-863936-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:39:21.691600   22106 main.go:141] libmachine: (ha-863936-m03)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/ha-863936-m03.rawdisk'/>
	I0816 12:39:21.691611   22106 main.go:141] libmachine: (ha-863936-m03)       <target dev='hda' bus='virtio'/>
	I0816 12:39:21.691622   22106 main.go:141] libmachine: (ha-863936-m03)     </disk>
	I0816 12:39:21.691630   22106 main.go:141] libmachine: (ha-863936-m03)     <interface type='network'>
	I0816 12:39:21.691642   22106 main.go:141] libmachine: (ha-863936-m03)       <source network='mk-ha-863936'/>
	I0816 12:39:21.691655   22106 main.go:141] libmachine: (ha-863936-m03)       <model type='virtio'/>
	I0816 12:39:21.691667   22106 main.go:141] libmachine: (ha-863936-m03)     </interface>
	I0816 12:39:21.691678   22106 main.go:141] libmachine: (ha-863936-m03)     <interface type='network'>
	I0816 12:39:21.691688   22106 main.go:141] libmachine: (ha-863936-m03)       <source network='default'/>
	I0816 12:39:21.691698   22106 main.go:141] libmachine: (ha-863936-m03)       <model type='virtio'/>
	I0816 12:39:21.691707   22106 main.go:141] libmachine: (ha-863936-m03)     </interface>
	I0816 12:39:21.691717   22106 main.go:141] libmachine: (ha-863936-m03)     <serial type='pty'>
	I0816 12:39:21.691736   22106 main.go:141] libmachine: (ha-863936-m03)       <target port='0'/>
	I0816 12:39:21.691755   22106 main.go:141] libmachine: (ha-863936-m03)     </serial>
	I0816 12:39:21.691784   22106 main.go:141] libmachine: (ha-863936-m03)     <console type='pty'>
	I0816 12:39:21.691806   22106 main.go:141] libmachine: (ha-863936-m03)       <target type='serial' port='0'/>
	I0816 12:39:21.691827   22106 main.go:141] libmachine: (ha-863936-m03)     </console>
	I0816 12:39:21.691838   22106 main.go:141] libmachine: (ha-863936-m03)     <rng model='virtio'>
	I0816 12:39:21.691851   22106 main.go:141] libmachine: (ha-863936-m03)       <backend model='random'>/dev/random</backend>
	I0816 12:39:21.691867   22106 main.go:141] libmachine: (ha-863936-m03)     </rng>
	I0816 12:39:21.691879   22106 main.go:141] libmachine: (ha-863936-m03)     
	I0816 12:39:21.691888   22106 main.go:141] libmachine: (ha-863936-m03)     
	I0816 12:39:21.691897   22106 main.go:141] libmachine: (ha-863936-m03)   </devices>
	I0816 12:39:21.691909   22106 main.go:141] libmachine: (ha-863936-m03) </domain>
	I0816 12:39:21.691920   22106 main.go:141] libmachine: (ha-863936-m03) 
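The XML printed above is the domain definition handed to libvirt before the VM is booted. A sketch of the define-then-start sequence against the same qemu:///system URI, using the Go libvirt bindings; the import path and the truncated XML placeholder are assumptions (minikube itself goes through the docker-machine-driver-kvm2 plugin rather than calling libvirt from this process):

    package main

    import (
    	"fmt"

    	libvirt "libvirt.org/go/libvirt" // assumed import path; older code uses github.com/libvirt/libvirt-go
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Placeholder: in practice this is the full <domain> document from the log above.
    	domainXML := `<domain type='kvm'>...</domain>`

    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	// Defining only registers the domain; Create() actually boots it.
    	if err := dom.Create(); err != nil {
    		panic(err)
    	}
    	fmt.Println("domain started")
    }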
	I0816 12:39:21.698387   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:1b:b9:b5 in network default
	I0816 12:39:21.698904   22106 main.go:141] libmachine: (ha-863936-m03) Ensuring networks are active...
	I0816 12:39:21.698923   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:21.699565   22106 main.go:141] libmachine: (ha-863936-m03) Ensuring network default is active
	I0816 12:39:21.699871   22106 main.go:141] libmachine: (ha-863936-m03) Ensuring network mk-ha-863936 is active
	I0816 12:39:21.700365   22106 main.go:141] libmachine: (ha-863936-m03) Getting domain xml...
	I0816 12:39:21.701033   22106 main.go:141] libmachine: (ha-863936-m03) Creating domain...
	I0816 12:39:22.916117   22106 main.go:141] libmachine: (ha-863936-m03) Waiting to get IP...
	I0816 12:39:22.916874   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:22.917291   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:22.917317   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:22.917275   23071 retry.go:31] will retry after 233.955582ms: waiting for machine to come up
	I0816 12:39:23.152974   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:23.153467   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:23.153493   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:23.153415   23071 retry.go:31] will retry after 270.571352ms: waiting for machine to come up
	I0816 12:39:23.425833   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:23.426386   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:23.426411   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:23.426333   23071 retry.go:31] will retry after 308.115392ms: waiting for machine to come up
	I0816 12:39:23.735782   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:23.736291   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:23.736326   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:23.736237   23071 retry.go:31] will retry after 580.049804ms: waiting for machine to come up
	I0816 12:39:24.318069   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:24.318561   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:24.318586   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:24.318523   23071 retry.go:31] will retry after 602.942822ms: waiting for machine to come up
	I0816 12:39:24.923074   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:24.923490   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:24.923516   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:24.923446   23071 retry.go:31] will retry after 579.631175ms: waiting for machine to come up
	I0816 12:39:25.504124   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:25.504540   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:25.504566   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:25.504503   23071 retry.go:31] will retry after 943.910472ms: waiting for machine to come up
	I0816 12:39:26.450255   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:26.450645   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:26.450696   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:26.450641   23071 retry.go:31] will retry after 1.228766387s: waiting for machine to come up
	I0816 12:39:27.680944   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:27.681389   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:27.681417   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:27.681342   23071 retry.go:31] will retry after 1.495017949s: waiting for machine to come up
	I0816 12:39:29.178303   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:29.178728   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:29.178756   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:29.178677   23071 retry.go:31] will retry after 2.251323948s: waiting for machine to come up
	I0816 12:39:31.431594   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:31.432007   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:31.432038   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:31.431964   23071 retry.go:31] will retry after 2.837656375s: waiting for machine to come up
	I0816 12:39:34.271694   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:34.272287   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:34.272311   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:34.272238   23071 retry.go:31] will retry after 2.568098948s: waiting for machine to come up
	I0816 12:39:36.842648   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:36.843094   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:36.843117   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:36.843049   23071 retry.go:31] will retry after 3.039763146s: waiting for machine to come up
	I0816 12:39:39.885857   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:39.886300   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:39.886334   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:39.886244   23071 retry.go:31] will retry after 4.12414469s: waiting for machine to come up
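The retry.go lines above show the wait-for-IP pattern: attempt the lookup, sleep a randomised and growing delay, and try again until the VM obtains a DHCP lease. A generic sketch of that loop; lookupIP is a hypothetical stand-in for the real lease lookup against network mk-ha-863936:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the real check, which matches the domain's MAC
    // address against the DHCP leases of the libvirt network (hypothetical helper).
    func lookupIP() (string, error) {
    	return "", errors.New("no lease yet")
    }

    func main() {
    	delay := 200 * time.Millisecond
    	for attempt := 1; attempt <= 15; attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("got IP:", ip)
    			return
    		}
    		// Jitter the delay and grow it between attempts, roughly matching the
    		// increasing "will retry after ..." intervals in the log.
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("attempt %d: will retry after %v\n", attempt, jittered)
    		time.Sleep(jittered)
    		delay = delay * 3 / 2
    	}
    	fmt.Println("timed out waiting for machine to come up")
    }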
	I0816 12:39:44.013251   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.013799   22106 main.go:141] libmachine: (ha-863936-m03) Found IP for machine: 192.168.39.116
	I0816 12:39:44.013824   22106 main.go:141] libmachine: (ha-863936-m03) Reserving static IP address...
	I0816 12:39:44.013837   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has current primary IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.014226   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find host DHCP lease matching {name: "ha-863936-m03", mac: "52:54:00:ec:05:59", ip: "192.168.39.116"} in network mk-ha-863936
	I0816 12:39:44.085190   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Getting to WaitForSSH function...
	I0816 12:39:44.085217   22106 main.go:141] libmachine: (ha-863936-m03) Reserved static IP address: 192.168.39.116
	I0816 12:39:44.085229   22106 main.go:141] libmachine: (ha-863936-m03) Waiting for SSH to be available...
	I0816 12:39:44.087613   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.087988   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.088019   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.088107   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Using SSH client type: external
	I0816 12:39:44.088132   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa (-rw-------)
	I0816 12:39:44.088162   22106 main.go:141] libmachine: (ha-863936-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:39:44.088176   22106 main.go:141] libmachine: (ha-863936-m03) DBG | About to run SSH command:
	I0816 12:39:44.088198   22106 main.go:141] libmachine: (ha-863936-m03) DBG | exit 0
	I0816 12:39:44.213073   22106 main.go:141] libmachine: (ha-863936-m03) DBG | SSH cmd err, output: <nil>: 
	I0816 12:39:44.213332   22106 main.go:141] libmachine: (ha-863936-m03) KVM machine creation complete!
	I0816 12:39:44.213667   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetConfigRaw
	I0816 12:39:44.214144   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:44.214319   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:44.214508   22106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:39:44.214522   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:39:44.215811   22106 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:39:44.215827   22106 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:39:44.215835   22106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:39:44.215843   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.218240   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.218645   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.218675   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.218760   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.218924   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.219081   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.219326   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.219501   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.219695   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.219709   22106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:39:44.324435   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:39:44.324465   22106 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:39:44.324478   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.327086   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.327395   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.327423   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.327600   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.327773   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.327943   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.328090   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.328256   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.328415   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.328432   22106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:39:44.433470   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:39:44.433538   22106 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:39:44.433546   22106 main.go:141] libmachine: Provisioning with buildroot...
	I0816 12:39:44.433553   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:44.433800   22106 buildroot.go:166] provisioning hostname "ha-863936-m03"
	I0816 12:39:44.433823   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:44.434019   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.436864   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.437379   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.437408   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.437564   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.437762   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.437955   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.438139   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.438335   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.438537   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.438556   22106 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936-m03 && echo "ha-863936-m03" | sudo tee /etc/hostname
	I0816 12:39:44.563716   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936-m03
	
	I0816 12:39:44.563742   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.566487   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.566844   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.566871   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.567111   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.567319   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.567496   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.567613   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.567789   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.567976   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.567994   22106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:39:44.681978   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:39:44.682010   22106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:39:44.682023   22106 buildroot.go:174] setting up certificates
	I0816 12:39:44.682033   22106 provision.go:84] configureAuth start
	I0816 12:39:44.682041   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:44.682294   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:44.684600   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.684925   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.684955   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.685109   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.687476   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.687806   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.687834   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.687928   22106 provision.go:143] copyHostCerts
	I0816 12:39:44.687959   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:39:44.688002   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:39:44.688020   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:39:44.688093   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:39:44.688186   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:39:44.688215   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:39:44.688224   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:39:44.688261   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:39:44.688324   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:39:44.688352   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:39:44.688361   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:39:44.688395   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:39:44.688470   22106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936-m03 san=[127.0.0.1 192.168.39.116 ha-863936-m03 localhost minikube]
	I0816 12:39:44.848981   22106 provision.go:177] copyRemoteCerts
	I0816 12:39:44.849044   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:39:44.849073   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.851543   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.851859   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.851884   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.852088   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.852259   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.852403   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.852547   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:44.935198   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:39:44.935272   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 12:39:44.959658   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:39:44.959735   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:39:44.985094   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:39:44.985166   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:39:45.009372   22106 provision.go:87] duration metric: took 327.327581ms to configureAuth
	I0816 12:39:45.009405   22106 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:39:45.009620   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:39:45.009688   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.012702   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.013070   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.013102   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.013282   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.013464   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.013667   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.013889   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.014066   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:45.014285   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:45.014303   22106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:39:45.292636   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:39:45.292665   22106 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:39:45.292675   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetURL
	I0816 12:39:45.294064   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Using libvirt version 6000000
	I0816 12:39:45.296263   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.296581   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.296608   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.296770   22106 main.go:141] libmachine: Docker is up and running!
	I0816 12:39:45.296785   22106 main.go:141] libmachine: Reticulating splines...
	I0816 12:39:45.296793   22106 client.go:171] duration metric: took 23.937594799s to LocalClient.Create
	I0816 12:39:45.296820   22106 start.go:167] duration metric: took 23.937668178s to libmachine.API.Create "ha-863936"
	I0816 12:39:45.296831   22106 start.go:293] postStartSetup for "ha-863936-m03" (driver="kvm2")
	I0816 12:39:45.296842   22106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:39:45.296858   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.297073   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:39:45.297098   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.299166   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.299488   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.299514   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.299630   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.299783   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.299942   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.300065   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:45.383766   22106 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:39:45.388142   22106 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:39:45.388161   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:39:45.388242   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:39:45.388326   22106 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:39:45.388338   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:39:45.388432   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:39:45.398171   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:39:45.422664   22106 start.go:296] duration metric: took 125.819541ms for postStartSetup
	I0816 12:39:45.422716   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetConfigRaw
	I0816 12:39:45.423243   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:45.425865   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.426320   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.426353   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.426678   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:39:45.426870   22106 start.go:128] duration metric: took 24.086054434s to createHost
	I0816 12:39:45.426893   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.429438   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.429827   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.429857   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.430024   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.430210   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.430386   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.430539   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.430715   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:45.430903   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:45.430916   22106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:39:45.537998   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811985.515221918
	
	I0816 12:39:45.538019   22106 fix.go:216] guest clock: 1723811985.515221918
	I0816 12:39:45.538028   22106 fix.go:229] Guest: 2024-08-16 12:39:45.515221918 +0000 UTC Remote: 2024-08-16 12:39:45.426882078 +0000 UTC m=+192.431234971 (delta=88.33984ms)
	I0816 12:39:45.538049   22106 fix.go:200] guest clock delta is within tolerance: 88.33984ms
	I0816 12:39:45.538056   22106 start.go:83] releasing machines lock for "ha-863936-m03", held for 24.197348079s
	I0816 12:39:45.538075   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.538325   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:45.540772   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.541097   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.541120   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.543397   22106 out.go:177] * Found network options:
	I0816 12:39:45.544579   22106 out.go:177]   - NO_PROXY=192.168.39.2,192.168.39.101
	W0816 12:39:45.545721   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 12:39:45.545742   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:39:45.545755   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.546283   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.546457   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.546539   22106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:39:45.546567   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	W0816 12:39:45.546777   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 12:39:45.546805   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:39:45.546856   22106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:39:45.546875   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.549485   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.549808   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.549957   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.549987   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.550044   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.550179   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.550310   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.550329   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.550361   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.550473   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.550566   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:45.550646   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.550789   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.550916   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:45.784207   22106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:39:45.791086   22106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:39:45.791165   22106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:39:45.807570   22106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:39:45.807594   22106 start.go:495] detecting cgroup driver to use...
	I0816 12:39:45.807690   22106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:39:45.823478   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:39:45.837750   22106 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:39:45.837808   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:39:45.851187   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:39:45.864391   22106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:39:45.993300   22106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:39:46.132685   22106 docker.go:233] disabling docker service ...
	I0816 12:39:46.132753   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:39:46.149062   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:39:46.163221   22106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:39:46.309389   22106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:39:46.431780   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:39:46.447205   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:39:46.467249   22106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:39:46.467316   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.480187   22106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:39:46.480244   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.491467   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.503863   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.516208   22106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:39:46.528759   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.541212   22106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.560021   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.572038   22106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:39:46.582979   22106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:39:46.583030   22106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:39:46.598010   22106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:39:46.609096   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:39:46.744055   22106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:39:46.892116   22106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:39:46.892194   22106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:39:46.897419   22106 start.go:563] Will wait 60s for crictl version
	I0816 12:39:46.897490   22106 ssh_runner.go:195] Run: which crictl
	I0816 12:39:46.901432   22106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:39:46.943865   22106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:39:46.943950   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:39:46.972800   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:39:47.005284   22106 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:39:47.006707   22106 out.go:177]   - env NO_PROXY=192.168.39.2
	I0816 12:39:47.007924   22106 out.go:177]   - env NO_PROXY=192.168.39.2,192.168.39.101
	I0816 12:39:47.009234   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:47.011740   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:47.012103   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:47.012130   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:47.012260   22106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:39:47.016138   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:39:47.028223   22106 mustload.go:65] Loading cluster: ha-863936
	I0816 12:39:47.028463   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:39:47.028807   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:47.028848   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:47.043959   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0816 12:39:47.044352   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:47.044793   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:47.044812   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:47.045173   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:47.045371   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:39:47.046860   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:39:47.047119   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:47.047148   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:47.061012   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43707
	I0816 12:39:47.061370   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:47.061759   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:47.061780   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:47.062050   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:47.062234   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:39:47.062390   22106 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.116
	I0816 12:39:47.062401   22106 certs.go:194] generating shared ca certs ...
	I0816 12:39:47.062415   22106 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:39:47.062555   22106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:39:47.062609   22106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:39:47.062621   22106 certs.go:256] generating profile certs ...
	I0816 12:39:47.062709   22106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:39:47.062740   22106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242
	I0816 12:39:47.062759   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.116 192.168.39.254]
	I0816 12:39:47.332156   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242 ...
	I0816 12:39:47.332187   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242: {Name:mk0783a32718663628076e9a86ffe5813a5bef31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:39:47.332347   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242 ...
	I0816 12:39:47.332357   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242: {Name:mk54e687be730ef92f1235055c48ec58a7b5a2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:39:47.332423   22106 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:39:47.332574   22106 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:39:47.332730   22106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:39:47.332748   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:39:47.332768   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:39:47.332787   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:39:47.332805   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:39:47.332822   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:39:47.332836   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:39:47.332849   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:39:47.332867   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:39:47.332952   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:39:47.332991   22106 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:39:47.333005   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:39:47.333037   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:39:47.333067   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:39:47.333098   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:39:47.333151   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:39:47.333189   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.333211   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.333229   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.333270   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:39:47.336364   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:47.336766   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:39:47.336793   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:47.336996   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:39:47.337221   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:39:47.337368   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:39:47.337501   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:39:47.409377   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 12:39:47.414598   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 12:39:47.426795   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 12:39:47.431854   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0816 12:39:47.443184   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 12:39:47.451247   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 12:39:47.461963   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 12:39:47.466038   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 12:39:47.476237   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 12:39:47.480646   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 12:39:47.491284   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 12:39:47.495307   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0816 12:39:47.505716   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:39:47.533064   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:39:47.558569   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:39:47.582587   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:39:47.605768   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 12:39:47.628844   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 12:39:47.653715   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:39:47.678391   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:39:47.702330   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:39:47.726281   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:39:47.751043   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:39:47.777410   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 12:39:47.795096   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0816 12:39:47.812870   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 12:39:47.830301   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 12:39:47.848085   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 12:39:47.866159   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0816 12:39:47.882810   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 12:39:47.899353   22106 ssh_runner.go:195] Run: openssl version
	I0816 12:39:47.905031   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:39:47.915599   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.920005   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.920054   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.925869   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:39:47.936028   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:39:47.946226   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.950887   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.950937   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.956305   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:39:47.966967   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:39:47.978398   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.982636   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.982686   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.988111   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:39:47.998232   22106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:39:48.002033   22106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:39:48.002087   22106 kubeadm.go:934] updating node {m03 192.168.39.116 8443 v1.31.0 crio true true} ...
	I0816 12:39:48.002163   22106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:39:48.002187   22106 kube-vip.go:115] generating kube-vip config ...
	I0816 12:39:48.002215   22106 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:39:48.020229   22106 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:39:48.020300   22106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
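The kube-vip manifest above is an ordinary static-pod definition, so it can be sanity-checked offline by unmarshalling it into a corev1.Pod before it is written to /etc/kubernetes/manifests. A short sketch, assuming the sigs.k8s.io/yaml and k8s.io/api modules are available; it is illustrative only and not part of minikube.

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // the manifest printed in the log
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err) // malformed YAML would surface here, before kubelet ever sees it
	}
	// Dump the env vars kube-vip was configured with (vip_arp, address, lb_enable, ...).
	for _, e := range pod.Spec.Containers[0].Env {
		fmt.Printf("%s=%q\n", e.Name, e.Value)
	}
}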
	I0816 12:39:48.020365   22106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:39:48.030631   22106 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 12:39:48.030689   22106 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 12:39:48.040332   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 12:39:48.040357   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:39:48.040387   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 12:39:48.040430   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:39:48.040433   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:39:48.040337   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 12:39:48.040496   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:39:48.040587   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:39:48.055398   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:39:48.055478   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 12:39:48.055493   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:39:48.055505   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 12:39:48.055551   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 12:39:48.055581   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 12:39:48.081435   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 12:39:48.081471   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
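Each kubectl/kubeadm/kubelet binary above is fetched with a checksum reference (?checksum=file:...sha256) and then copied into /var/lib/minikube/binaries. A stripped-down sketch of that download-and-verify step, assuming the .sha256 URL returns a bare hex digest as dl.k8s.io currently does; it is illustrative, not the downloader minikube actually uses.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(bin)
	if got, want := hex.EncodeToString(sum[:]), strings.TrimSpace(string(sumFile)); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified")
}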
	I0816 12:39:48.900755   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 12:39:48.910655   22106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 12:39:48.929582   22106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:39:48.947656   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 12:39:48.964345   22106 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:39:48.968662   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:39:48.981049   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:39:49.113493   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
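The /etc/hosts edit a few lines up is the usual idempotent pattern: strip any stale control-plane.minikube.internal line, then append the current VIP so the new node can resolve the HA endpoint. A rough Go equivalent of that grep/echo/cp one-liner, for illustration only (it needs write access to /etc/hosts).

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry for the control-plane VIP hostname.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}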
	I0816 12:39:49.140698   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:39:49.141190   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:49.141237   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:49.156476   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0816 12:39:49.156852   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:49.157361   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:49.157400   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:49.157748   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:49.157915   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:39:49.158050   22106 start.go:317] joinCluster: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:39:49.158216   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 12:39:49.158239   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:39:49.161272   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:49.161849   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:39:49.161876   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:49.162129   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:39:49.162319   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:39:49.162498   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:39:49.162651   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:39:49.308570   22106 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:39:49.308615   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u6h2w3.uj2dx2uo7mssayjl --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m03 --control-plane --apiserver-advertise-address=192.168.39.116 --apiserver-bind-port=8443"
	I0816 12:40:11.090647   22106 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u6h2w3.uj2dx2uo7mssayjl --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m03 --control-plane --apiserver-advertise-address=192.168.39.116 --apiserver-bind-port=8443": (21.781996337s)
	I0816 12:40:11.090686   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 12:40:11.713290   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863936-m03 minikube.k8s.io/updated_at=2024_08_16T12_40_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=ha-863936 minikube.k8s.io/primary=false
	I0816 12:40:11.848300   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863936-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 12:40:11.955312   22106 start.go:319] duration metric: took 22.797258761s to joinCluster
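joinCluster reduces to the two commands visible above: "kubeadm token create --print-join-command --ttl=0" on the primary, then the printed "kubeadm join ... --control-plane" on the new node, followed by labeling it and removing the control-plane NoSchedule taint. The final taint removal can also be done through the API rather than kubectl; a client-go sketch, illustrative only, with the kubeconfig path and node name taken from this log.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-863936-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Keep every taint except control-plane:NoSchedule, i.e. the same effect as
	// "kubectl taint nodes ha-863936-m03 node-role.kubernetes.io/control-plane:NoSchedule-".
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}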
	I0816 12:40:11.955390   22106 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:40:11.955718   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:40:11.958009   22106 out.go:177] * Verifying Kubernetes components...
	I0816 12:40:11.959732   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:40:12.229920   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:40:12.277487   22106 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:40:12.277772   22106 kapi.go:59] client config for ha-863936: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key", CAFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 12:40:12.277857   22106 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0816 12:40:12.278091   22106 node_ready.go:35] waiting up to 6m0s for node "ha-863936-m03" to be "Ready" ...
	I0816 12:40:12.278182   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:12.278195   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:12.278206   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:12.278212   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:12.282256   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:12.778472   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:12.778495   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:12.778507   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:12.778514   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:12.781886   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:13.278748   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:13.278775   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:13.278787   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:13.278793   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:13.283264   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:13.778960   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:13.778987   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:13.778999   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:13.779004   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:13.782687   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:14.279100   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:14.279125   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:14.279138   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:14.279143   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:14.283127   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:14.283959   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:14.778564   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:14.778582   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:14.778590   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:14.778594   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:14.782146   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:15.279149   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:15.279171   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:15.279190   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:15.279194   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:15.282879   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:15.779236   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:15.779259   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:15.779266   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:15.779270   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:15.782706   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:16.278302   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:16.278330   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:16.278341   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:16.278346   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:16.281685   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:16.779155   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:16.779179   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:16.779189   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:16.779197   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:16.783557   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:16.784096   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:17.278627   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:17.278649   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:17.278660   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:17.278668   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:17.282931   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:17.779106   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:17.779153   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:17.779165   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:17.779170   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:17.782823   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:18.278705   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:18.278732   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:18.278742   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:18.278749   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:18.281818   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:18.779227   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:18.779246   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:18.779255   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:18.779259   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:18.782255   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:19.278470   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:19.278491   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:19.278498   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:19.278502   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:19.282409   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:19.282994   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:19.778380   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:19.778401   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:19.778408   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:19.778412   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:19.831668   22106 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0816 12:40:20.278686   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:20.278707   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:20.278716   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:20.278720   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:20.282475   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:20.778533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:20.778561   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:20.778577   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:20.778583   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:20.782177   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:21.278736   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:21.278761   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:21.278772   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:21.278779   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:21.282659   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:21.283360   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:21.779177   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:21.779196   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:21.779204   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:21.779209   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:21.782339   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:22.278604   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:22.278626   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:22.278635   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:22.278639   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:22.282134   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:22.778983   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:22.779008   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:22.779017   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:22.779022   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:22.782768   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:23.278365   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:23.278387   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:23.278395   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:23.278400   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:23.282675   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:23.778440   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:23.778461   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:23.778469   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:23.778474   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:23.782033   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:23.782575   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:24.279307   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:24.279343   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:24.279352   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:24.279360   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:24.282719   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:24.778754   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:24.778775   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:24.778786   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:24.778792   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:24.782183   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:25.278298   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:25.278324   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:25.278334   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:25.278341   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:25.282223   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:25.779215   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:25.779239   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:25.779245   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:25.779250   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:25.782593   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:25.783197   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:26.279008   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:26.279029   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:26.279036   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:26.279040   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:26.282803   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:26.778232   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:26.778254   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:26.778262   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:26.778266   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:26.781679   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:27.278612   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:27.278638   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:27.278650   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:27.278655   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:27.282968   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:27.778330   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:27.778367   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:27.778377   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:27.778382   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:27.781449   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:28.278931   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:28.278953   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:28.278961   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:28.278965   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:28.282714   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:28.283388   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:28.778360   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:28.778381   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:28.778389   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:28.778392   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:28.781029   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:29.279267   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:29.279291   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:29.279301   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:29.279308   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:29.283008   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:29.778429   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:29.778450   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:29.778461   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:29.778467   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:29.782957   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:30.278492   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:30.278520   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:30.278530   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:30.278536   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:30.282565   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:30.778612   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:30.778633   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:30.778641   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:30.778645   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:30.781743   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:30.782623   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:31.278323   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:31.278350   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.278361   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.278365   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.282230   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:31.779052   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:31.779073   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.779083   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.779089   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.781908   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.782448   22106 node_ready.go:49] node "ha-863936-m03" has status "Ready":"True"
	I0816 12:40:31.782467   22106 node_ready.go:38] duration metric: took 19.504360065s for node "ha-863936-m03" to be "Ready" ...
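The polling above, one GET of the node roughly every 500ms until the Ready condition flips to True, is the whole node_ready wait. A compact client-go version of the same check, for illustration; minikube builds its client from the kubeconfig shown earlier rather than via this helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-3966/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-863936-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}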
	I0816 12:40:31.782475   22106 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:40:31.782533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:31.782543   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.782550   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.782555   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.788627   22106 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 12:40:31.795832   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.795920   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7gfgm
	I0816 12:40:31.795931   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.795941   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.795951   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.798749   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.799323   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:31.799338   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.799349   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.799354   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.802274   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.802790   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.802806   22106 pod_ready.go:82] duration metric: took 6.951459ms for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.802817   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.802892   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ssb5h
	I0816 12:40:31.802903   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.802912   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.802920   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.805842   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.806670   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:31.806687   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.806697   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.806704   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.809446   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.809991   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.810012   22106 pod_ready.go:82] duration metric: took 7.186952ms for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.810030   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.810159   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936
	I0816 12:40:31.810179   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.810190   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.810195   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.813055   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.813625   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:31.813638   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.813646   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.813653   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.815932   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.816479   22106 pod_ready.go:93] pod "etcd-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.816493   22106 pod_ready.go:82] duration metric: took 6.455136ms for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.816501   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.816543   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m02
	I0816 12:40:31.816550   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.816557   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.816562   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.818930   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.819533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:31.819547   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.819554   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.819557   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.821944   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.822437   22106 pod_ready.go:93] pod "etcd-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.822451   22106 pod_ready.go:82] duration metric: took 5.944552ms for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.822458   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.980023   22106 request.go:632] Waited for 157.516461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m03
	I0816 12:40:31.980075   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m03
	I0816 12:40:31.980080   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.980089   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.980096   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.983243   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.179679   22106 request.go:632] Waited for 195.791488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:32.179741   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:32.179749   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.179759   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.179768   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.183094   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.183726   22106 pod_ready.go:93] pod "etcd-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:32.183747   22106 pod_ready.go:82] duration metric: took 361.282787ms for pod "etcd-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
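The request.go:632 messages above are client-go's client-side rate limiter at work: with QPS and Burst left unset on the rest.Config, the client falls back to 5 requests/s with a burst of 10, so back-to-back pod and node GETs start queuing for a couple hundred milliseconds. Raising the limits is a one-line config change; a sketch with illustrative values.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-3966/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Zero values mean "use client-go's defaults" (5 QPS, burst 10), which is what
	// produces the throttling waits seen in this log. Bump them to avoid the queueing.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client configured: %T\n", cs)
}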
	I0816 12:40:32.183770   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.379843   22106 request.go:632] Waited for 195.98205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:40:32.379908   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:40:32.379915   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.379929   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.379939   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.384347   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:32.579735   22106 request.go:632] Waited for 194.320249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:32.579824   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:32.579836   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.579844   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.579849   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.583255   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.583891   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:32.583909   22106 pod_ready.go:82] duration metric: took 400.128194ms for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.583919   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.780134   22106 request.go:632] Waited for 196.057891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:40:32.780196   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:40:32.780202   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.780209   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.780213   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.783424   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.979527   22106 request.go:632] Waited for 195.450448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:32.979589   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:32.979596   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.979603   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.979606   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.983008   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.983583   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:32.983605   22106 pod_ready.go:82] duration metric: took 399.678344ms for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.983617   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.179483   22106 request.go:632] Waited for 195.78335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m03
	I0816 12:40:33.179548   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m03
	I0816 12:40:33.179556   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.179563   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.179567   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.182954   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:33.379921   22106 request.go:632] Waited for 196.353072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:33.379978   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:33.379983   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.379989   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.379996   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.383619   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:33.384225   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:33.384243   22106 pod_ready.go:82] duration metric: took 400.618667ms for pod "kube-apiserver-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.384254   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.579467   22106 request.go:632] Waited for 195.152422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:40:33.579544   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:40:33.579550   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.579557   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.579561   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.582685   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:33.779846   22106 request.go:632] Waited for 196.387517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:33.779912   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:33.779925   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.779935   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.779944   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.785595   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:40:33.786659   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:33.786684   22106 pod_ready.go:82] duration metric: took 402.421297ms for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.786698   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.979597   22106 request.go:632] Waited for 192.829532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:40:33.979650   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:40:33.979655   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.979663   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.979667   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.982926   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.179274   22106 request.go:632] Waited for 195.397989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:34.179329   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:34.179336   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.179346   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.179355   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.182593   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.183339   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:34.183357   22106 pod_ready.go:82] duration metric: took 396.647187ms for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.183370   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.379371   22106 request.go:632] Waited for 195.903008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m03
	I0816 12:40:34.379446   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m03
	I0816 12:40:34.379451   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.379462   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.379473   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.382770   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.579862   22106 request.go:632] Waited for 196.312482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.579913   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.579918   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.579925   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.579928   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.583461   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.583954   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:34.583972   22106 pod_ready.go:82] duration metric: took 400.581972ms for pod "kube-controller-manager-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.583984   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25gzj" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.779470   22106 request.go:632] Waited for 195.416164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25gzj
	I0816 12:40:34.779533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25gzj
	I0816 12:40:34.779539   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.779551   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.779560   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.782820   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.979908   22106 request.go:632] Waited for 196.334527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.979965   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.979970   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.979978   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.979983   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.983250   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.983739   22106 pod_ready.go:93] pod "kube-proxy-25gzj" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:34.983756   22106 pod_ready.go:82] duration metric: took 399.761031ms for pod "kube-proxy-25gzj" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.983768   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.179868   22106 request.go:632] Waited for 196.036481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:40:35.179923   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:40:35.179929   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.179937   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.179940   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.183162   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:35.379150   22106 request.go:632] Waited for 195.284661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:35.379226   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:35.379232   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.379239   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.379243   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.382532   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:35.383088   22106 pod_ready.go:93] pod "kube-proxy-7lvfc" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:35.383107   22106 pod_ready.go:82] duration metric: took 399.332611ms for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.383116   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.580093   22106 request.go:632] Waited for 196.911721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:40:35.580184   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:40:35.580194   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.580204   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.580210   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.584457   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:35.780082   22106 request.go:632] Waited for 194.340611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:35.780145   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:35.780151   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.780158   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.780162   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.783397   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:35.784071   22106 pod_ready.go:93] pod "kube-proxy-g75mg" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:35.784090   22106 pod_ready.go:82] duration metric: took 400.967246ms for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.784101   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.979311   22106 request.go:632] Waited for 195.12957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:40:35.979386   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:40:35.979396   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.979403   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.979407   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.986447   22106 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 12:40:36.179743   22106 request.go:632] Waited for 192.239359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:36.179811   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:36.179818   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.179826   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.179831   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.183193   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.183606   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:36.183625   22106 pod_ready.go:82] duration metric: took 399.516281ms for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.183636   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.379990   22106 request.go:632] Waited for 196.29926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:40:36.380046   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:40:36.380051   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.380058   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.380063   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.383859   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.580013   22106 request.go:632] Waited for 195.391551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:36.580071   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:36.580076   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.580085   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.580089   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.583266   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.583762   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:36.583780   22106 pod_ready.go:82] duration metric: took 400.136201ms for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.583793   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.779994   22106 request.go:632] Waited for 196.132372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m03
	I0816 12:40:36.780066   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m03
	I0816 12:40:36.780072   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.780078   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.780111   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.783562   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.979513   22106 request.go:632] Waited for 195.236791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:36.979562   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:36.979567   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.979574   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.979580   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.982705   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.983397   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:36.983412   22106 pod_ready.go:82] duration metric: took 399.611985ms for pod "kube-scheduler-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.983424   22106 pod_ready.go:39] duration metric: took 5.200938239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:40:36.983453   22106 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:40:36.983504   22106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:40:36.999977   22106 api_server.go:72] duration metric: took 25.04455467s to wait for apiserver process to appear ...
	I0816 12:40:36.999996   22106 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:40:37.000013   22106 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0816 12:40:37.004167   22106 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0816 12:40:37.004245   22106 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0816 12:40:37.004254   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.004262   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.004266   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.005260   22106 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 12:40:37.005325   22106 api_server.go:141] control plane version: v1.31.0
	I0816 12:40:37.005358   22106 api_server.go:131] duration metric: took 5.348086ms to wait for apiserver health ...
	I0816 12:40:37.005368   22106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:40:37.179338   22106 request.go:632] Waited for 173.906545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.179395   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.179404   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.179414   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.179424   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.184969   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:40:37.191790   22106 system_pods.go:59] 24 kube-system pods found
	I0816 12:40:37.191817   22106 system_pods.go:61] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:40:37.191824   22106 system_pods.go:61] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:40:37.191829   22106 system_pods.go:61] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:40:37.191834   22106 system_pods.go:61] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:40:37.191838   22106 system_pods.go:61] "etcd-ha-863936-m03" [7df0a1f8-b762-4019-96d4-ba0c9431169e] Running
	I0816 12:40:37.191843   22106 system_pods.go:61] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:40:37.191847   22106 system_pods.go:61] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:40:37.191851   22106 system_pods.go:61] "kindnet-zqs4l" [b9054301-c9d9-4f2e-94c9-4557d6f4af2c] Running
	I0816 12:40:37.191857   22106 system_pods.go:61] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:40:37.191862   22106 system_pods.go:61] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:40:37.191867   22106 system_pods.go:61] "kube-apiserver-ha-863936-m03" [0ad1dc81-9baf-46cf-854a-61fcbb617fab] Running
	I0816 12:40:37.191873   22106 system_pods.go:61] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:40:37.191881   22106 system_pods.go:61] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:40:37.191888   22106 system_pods.go:61] "kube-controller-manager-ha-863936-m03" [9f20b501-1733-41f6-a303-26e384227d1d] Running
	I0816 12:40:37.191893   22106 system_pods.go:61] "kube-proxy-25gzj" [8014f69d-cbe6-4369-8dbc-95bb5a429c22] Running
	I0816 12:40:37.191900   22106 system_pods.go:61] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:40:37.191905   22106 system_pods.go:61] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:40:37.191911   22106 system_pods.go:61] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:40:37.191919   22106 system_pods.go:61] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:40:37.191925   22106 system_pods.go:61] "kube-scheduler-ha-863936-m03" [4b3cb586-9afe-4d2d-845b-e6fd397c75d5] Running
	I0816 12:40:37.191930   22106 system_pods.go:61] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:40:37.191936   22106 system_pods.go:61] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:40:37.191942   22106 system_pods.go:61] "kube-vip-ha-863936-m03" [3c5c462a-b019-4973-89aa-af666e620286] Running
	I0816 12:40:37.191947   22106 system_pods.go:61] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:40:37.191956   22106 system_pods.go:74] duration metric: took 186.580365ms to wait for pod list to return data ...
	I0816 12:40:37.191967   22106 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:40:37.379407   22106 request.go:632] Waited for 187.360835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:40:37.379471   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:40:37.379478   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.379485   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.379488   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.383234   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:37.383368   22106 default_sa.go:45] found service account: "default"
	I0816 12:40:37.383390   22106 default_sa.go:55] duration metric: took 191.415483ms for default service account to be created ...
	I0816 12:40:37.383404   22106 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:40:37.579826   22106 request.go:632] Waited for 196.353434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.579907   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.579917   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.579927   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.579936   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.585456   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:40:37.591619   22106 system_pods.go:86] 24 kube-system pods found
	I0816 12:40:37.591648   22106 system_pods.go:89] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:40:37.591654   22106 system_pods.go:89] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:40:37.591659   22106 system_pods.go:89] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:40:37.591662   22106 system_pods.go:89] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:40:37.591666   22106 system_pods.go:89] "etcd-ha-863936-m03" [7df0a1f8-b762-4019-96d4-ba0c9431169e] Running
	I0816 12:40:37.591670   22106 system_pods.go:89] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:40:37.591675   22106 system_pods.go:89] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:40:37.591679   22106 system_pods.go:89] "kindnet-zqs4l" [b9054301-c9d9-4f2e-94c9-4557d6f4af2c] Running
	I0816 12:40:37.591683   22106 system_pods.go:89] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:40:37.591686   22106 system_pods.go:89] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:40:37.591690   22106 system_pods.go:89] "kube-apiserver-ha-863936-m03" [0ad1dc81-9baf-46cf-854a-61fcbb617fab] Running
	I0816 12:40:37.591694   22106 system_pods.go:89] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:40:37.591700   22106 system_pods.go:89] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:40:37.591706   22106 system_pods.go:89] "kube-controller-manager-ha-863936-m03" [9f20b501-1733-41f6-a303-26e384227d1d] Running
	I0816 12:40:37.591710   22106 system_pods.go:89] "kube-proxy-25gzj" [8014f69d-cbe6-4369-8dbc-95bb5a429c22] Running
	I0816 12:40:37.591714   22106 system_pods.go:89] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:40:37.591718   22106 system_pods.go:89] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:40:37.591722   22106 system_pods.go:89] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:40:37.591725   22106 system_pods.go:89] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:40:37.591731   22106 system_pods.go:89] "kube-scheduler-ha-863936-m03" [4b3cb586-9afe-4d2d-845b-e6fd397c75d5] Running
	I0816 12:40:37.591734   22106 system_pods.go:89] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:40:37.591737   22106 system_pods.go:89] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:40:37.591740   22106 system_pods.go:89] "kube-vip-ha-863936-m03" [3c5c462a-b019-4973-89aa-af666e620286] Running
	I0816 12:40:37.591743   22106 system_pods.go:89] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:40:37.591749   22106 system_pods.go:126] duration metric: took 208.336649ms to wait for k8s-apps to be running ...
	I0816 12:40:37.591758   22106 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:40:37.591801   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:40:37.608424   22106 system_svc.go:56] duration metric: took 16.656838ms WaitForService to wait for kubelet
	I0816 12:40:37.608446   22106 kubeadm.go:582] duration metric: took 25.65302687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:40:37.608467   22106 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:40:37.779945   22106 request.go:632] Waited for 171.399328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0816 12:40:37.780025   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0816 12:40:37.780036   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.780047   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.780054   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.784395   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:37.786298   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:40:37.786331   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:40:37.786346   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:40:37.786351   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:40:37.786357   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:40:37.786362   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:40:37.786368   22106 node_conditions.go:105] duration metric: took 177.896291ms to run NodePressure ...
	I0816 12:40:37.786382   22106 start.go:241] waiting for startup goroutines ...
	I0816 12:40:37.786414   22106 start.go:255] writing updated cluster config ...
	I0816 12:40:37.786855   22106 ssh_runner.go:195] Run: rm -f paused
	I0816 12:40:37.840521   22106 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 12:40:37.843493   22106 out.go:177] * Done! kubectl is now configured to use "ha-863936" cluster and "default" namespace by default
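	The start-up log above finishes by probing the apiserver's /healthz endpoint (lines logged by api_server.go) until it answers 200, then confirms the control-plane version before printing "Done!". As an illustration only, and not part of the captured test output, the following is a minimal Go sketch of an equivalent readiness poll; the apiserver address is taken from the log, while the timeout, poll interval, and the choice to skip TLS verification (this sketch has no access to the cluster CA) are assumptions made purely for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz repeatedly probes the given /healthz URL until it returns
	// HTTP 200 or the overall timeout expires, mirroring the behaviour the log
	// records for "waiting for apiserver healthz status".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Certificate verification is skipped only because this sketch does not
			// load the cluster CA; the real check uses the cluster's credentials.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // brief back-off between probes
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz returned 200: ok")
	}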
	
	
	==> CRI-O <==
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.008177070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37f50f74-6b04-414c-be88-07cb006437a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.008237852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37f50f74-6b04-414c-be88-07cb006437a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.008497825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812042160061472,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856865690872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856826422012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,PodSandboxId:17d99db1f4e4f93d1c171d0d47f3cd255f97dd2c89e9bdad7274573d55fc5109,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723811856781500226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723811844925839346,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172381184
0918258276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,PodSandboxId:440481aadacb06709d51c423b632e279ae02e3d4dbb17c738b0eff0b2c6c4ee1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172381183232
7212273,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f21ce045d97e5d71d18a00985c30116f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,PodSandboxId:6ebc21b6e76559aefefb4672c28d96c9b1f956e38bb4a72c99eda68a533786ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723811829558483285,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723811829571304558,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723811829473844589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,PodSandboxId:07219fcbf99eb43de5a7eaff62f9fbdfb6ea996deb4608e094841f000b349224,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723811829421176392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37f50f74-6b04-414c-be88-07cb006437a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.009348542Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b18900b6-d85f-4f11-ba87-f7d30b429796 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.009470901Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723812042208920241,StartedAt:1723812042238334250,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c52c866f-81c3-423f-a604-f792834e341e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c52c866f-81c3-423f-a604-f792834e341e/containers/busybox/b5295f70,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/c52c866f-81c3-423f-a604-f792834e341e/volumes/kubernetes.io~projected/kube-api-access-v8dtx,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-7dff88458-zqpfx_c52c866f-81c3-423f-a604-f792834e341e/busybox/0.log,Resources:&ContainerR
esources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b18900b6-d85f-4f11-ba87-f7d30b429796 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.009930182Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1d936379-a83d-4253-b75e-332f49872772 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.011183759Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811856939788269,StartedAt:1723811856990525675,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/5162fb17-6897-40d2-9c2c-80157ea46e07/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5162fb17-6897-40d2-9c2c-80157ea46e07/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5162fb17-6897-40d2-9c2c-80157ea46e07/containers/coredns/58cd6674,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGA
TION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5162fb17-6897-40d2-9c2c-80157ea46e07/volumes/kubernetes.io~projected/kube-api-access-mxs7m,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-6f6b679f8f-ssb5h_5162fb17-6897-40d2-9c2c-80157ea46e07/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1d936379-a83d-4253-b75e-332f49872772 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.011736031Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f0e4ec25-5949-4f37-9aba-3ffd853178f1 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.011836306Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811856888910169,StartedAt:1723811856940087700,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/797ae351-63bf-4994-a9bd-901367887b58/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/797ae351-63bf-4994-a9bd-901367887b58/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/797ae351-63bf-4994-a9bd-901367887b58/containers/coredns/683f56c4,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGA
TION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/797ae351-63bf-4994-a9bd-901367887b58/volumes/kubernetes.io~projected/kube-api-access-htff5,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-6f6b679f8f-7gfgm_797ae351-63bf-4994-a9bd-901367887b58/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f0e4ec25-5949-4f37-9aba-3ffd853178f1 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.012443273Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d8f4275b-abe6-4514-a370-bd0b1f8c752b name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.012642538Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811856859907428,StartedAt:1723811856916336896,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e6e7b7e6-00b6-42e2-9680-e6660e76bc6f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e6e7b7e6-00b6-42e2-9680-e6660e76bc6f/containers/storage-provisioner/9a17338d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/e6e7b7e6-00b6-42e2-9680-e6660e76bc6f/volumes/kubernetes.io~projected/kube-api-access-mf5mm,Readonly:true,SelinuxRelabel:false,Propag
ation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_e6e7b7e6-00b6-42e2-9680-e6660e76bc6f/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d8f4275b-abe6-4514-a370-bd0b1f8c752b name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.013287941Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2a7f7e27-4ed7-4631-82d2-60670a71963e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.013446369Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811844972210872,StartedAt:1723811845002015949,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240813-c6f155d6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/87bd9636-168b-4f61-9382-0914014af5c0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/87bd9636-168b-4f61-9382-0914014af5c0/containers/kindnet-cni/3aa5efc3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath:/etc/cn
i/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/87bd9636-168b-4f61-9382-0914014af5c0/volumes/kubernetes.io~projected/kube-api-access-s2llm,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-dddkq_87bd9636-168b-4f61-9382-0914014af5c0/kindnet-cni/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2a7f7e27-4ed7-4631-82d2-60670a71963e n
ame=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.014102531Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=954f5545-c221-4f4e-8c68-79a8ea23b0b2 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.014201207Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811840993390003,StartedAt:1723811841033645089,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8d22ea17-7ddd-4c07-89d5-0ebaa170066c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8d22ea17-7ddd-4c07-89d5-0ebaa170066c/containers/kube-proxy/21a1d77f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubel
et/pods/8d22ea17-7ddd-4c07-89d5-0ebaa170066c/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8d22ea17-7ddd-4c07-89d5-0ebaa170066c/volumes/kubernetes.io~projected/kube-api-access-jdqsw,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-g75mg_8d22ea17-7ddd-4c07-89d5-0ebaa170066c/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/
interceptors.go:74" id=954f5545-c221-4f4e-8c68-79a8ea23b0b2 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.014684066Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3db8f2f2-3166-46eb-ab8c-2174a18c4c5f name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.014770567Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811832390291375,StartedAt:1723811832418488616,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip:v0.8.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f21ce045d97e5d71d18a00985c30116f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f21ce045d97e5d71d18a00985c30116f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f21ce045d97e5d71d18a00985c30116f/containers/kube-vip/1325b7e0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/admin.conf,HostPath:/etc/kubernetes/super-admin.conf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-vip-ha-863936_f21ce045d97e5d71d18a00985c30116f/kube-vip/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj
:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3db8f2f2-3166-46eb-ab8c-2174a18c4c5f name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.015498244Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7b5a3a1e-4bc1-4d26-8a22-5d175596d793 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.015800549Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811829692103263,StartedAt:1723811829778582627,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4fb02b673e0a97e6d66a5a7404114d26/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4fb02b673e0a97e6d66a5a7404114d26/containers/kube-apiserver/af299803,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/
minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-863936_4fb02b673e0a97e6d66a5a7404114d26/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7b5a3a1e-4bc1-4d26-8a22-5d175596d793 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.016346728Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f50c2b37-3291-4d6a-902d-7461cebffe20 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.016436923Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811829665671977,StartedAt:1723811829741204388,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e4eb1802b446ee0233a6ed400bf8fd33/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e4eb1802b446ee0233a6ed400bf8fd33/containers/etcd/fc621fa8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-863936_e4eb18
02b446ee0233a6ed400bf8fd33/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f50c2b37-3291-4d6a-902d-7461cebffe20 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.016991801Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fc01eb61-214c-4cd7-8736-92eca1d2d88f name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.017072946Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811829579211558,StartedAt:1723811829700571789,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c0bcffbcfcc9f18fc26b991d99b329e9/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c0bcffbcfcc9f18fc26b991d99b329e9/containers/kube-scheduler/7166aee1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-863936_c0bcffbcfcc9f18fc26b991d99b329e9/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,C
puShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=fc01eb61-214c-4cd7-8736-92eca1d2d88f name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.017586831Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5ec616ee-adad-4b14-b0c2-84a07eff22e0 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 12:44:16 ha-863936 crio[680]: time="2024-08-16 12:44:16.017678258Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723811829521348154,StartedAt:1723811829633983016,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1758561f6a2148ce3a7eabea3ce99a1a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1758561f6a2148ce3a7eabea3ce99a1a/containers/kube-controller-manager/a63c3289,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*
IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-ha-863936_1758561f6a2148ce3a7eabea3ce99a1a/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*H
ugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5ec616ee-adad-4b14-b0c2-84a07eff22e0 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e73d7f930e176       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5f9b33b7fe6f2       busybox-7dff88458-zqpfx
	a32107a6690bf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   13e4c008cfb7e       coredns-6f6b679f8f-ssb5h
	8fb58a4d7b8e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7061cc0bd22ac       coredns-6f6b679f8f-7gfgm
	c7eccab4aea0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   17d99db1f4e4f       storage-provisioner
	b83ba25619ab6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   d524a508e86ff       kindnet-dddkq
	4aa588906cdcd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   e0fda91da3630       kube-proxy-g75mg
	50ae5af99f597       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   440481aadacb0       kube-vip-ha-863936
	f34879b3d9bde       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   30242516e8e9a       etcd-ha-863936
	ee882e5e99dad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   6ebc21b6e7655       kube-apiserver-ha-863936
	4a0281c780fc2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   40cdcfe4bd9df       kube-scheduler-ha-863936
	2beea39795119       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   07219fcbf99eb       kube-controller-manager-ha-863936
	
	
	==> coredns [8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6] <==
	[INFO] 10.244.0.4:57950 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002003074s
	[INFO] 10.244.2.2:35545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013578902s
	[INFO] 10.244.2.2:50915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198375s
	[INFO] 10.244.2.2:54351 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003032048s
	[INFO] 10.244.2.2:33554 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202349s
	[INFO] 10.244.2.2:49854 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138224s
	[INFO] 10.244.2.2:52911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113497s
	[INFO] 10.244.1.2:58083 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001926786s
	[INFO] 10.244.1.2:40090 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179243s
	[INFO] 10.244.0.4:38072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911453s
	[INFO] 10.244.0.4:48123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124668s
	[INFO] 10.244.2.2:45589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104297s
	[INFO] 10.244.2.2:47676 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096845s
	[INFO] 10.244.2.2:34029 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090037s
	[INFO] 10.244.2.2:44387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085042s
	[INFO] 10.244.1.2:39606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160442s
	[INFO] 10.244.1.2:35616 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085764s
	[INFO] 10.244.1.2:41949 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261174s
	[INFO] 10.244.1.2:33001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071351s
	[INFO] 10.244.0.4:57464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150636s
	[INFO] 10.244.2.2:55232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242943s
	[INFO] 10.244.2.2:35398 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209274s
	[INFO] 10.244.1.2:40761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122103s
	[INFO] 10.244.1.2:46518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133408s
	[INFO] 10.244.1.2:41022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117384s
	
	
	==> coredns [a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696] <==
	[INFO] 10.244.1.2:36903 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001963097s
	[INFO] 10.244.2.2:42077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197227s
	[INFO] 10.244.2.2:53338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203298s
	[INFO] 10.244.1.2:37962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128488s
	[INFO] 10.244.1.2:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098031s
	[INFO] 10.244.1.2:33689 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277395s
	[INFO] 10.244.1.2:40131 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001237471s
	[INFO] 10.244.1.2:39633 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131283s
	[INFO] 10.244.1.2:60171 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121735s
	[INFO] 10.244.0.4:60191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114357s
	[INFO] 10.244.0.4:41890 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066371s
	[INFO] 10.244.0.4:55945 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119788s
	[INFO] 10.244.0.4:57226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001318461s
	[INFO] 10.244.0.4:56732 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093503s
	[INFO] 10.244.0.4:52075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104691s
	[INFO] 10.244.0.4:60105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121048s
	[INFO] 10.244.0.4:43134 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066121s
	[INFO] 10.244.0.4:44998 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063593s
	[INFO] 10.244.2.2:47337 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013984s
	[INFO] 10.244.2.2:54916 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155787s
	[INFO] 10.244.1.2:40477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149375s
	[INFO] 10.244.0.4:48877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125695s
	[INFO] 10.244.0.4:37769 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100407s
	[INFO] 10.244.0.4:53971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045729s
	[INFO] 10.244.0.4:37660 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000216606s
	
	
	==> describe nodes <==
	Name:               ha-863936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_37_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:37:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:44:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-863936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f8ad5d72f24178a58c9bc9c1f37801
	  System UUID:                10f8ad5d-72f2-4178-a58c-9bc9c1f37801
	  Boot ID:                    4cc922cf-4096-4ce6-955a-2954b5f98b77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zqpfx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-6f6b679f8f-7gfgm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m56s
	  kube-system                 coredns-6f6b679f8f-ssb5h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m56s
	  kube-system                 etcd-ha-863936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m
	  kube-system                 kindnet-dddkq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m56s
	  kube-system                 kube-apiserver-ha-863936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-ha-863936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-proxy-g75mg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-scheduler-ha-863936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-vip-ha-863936                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m55s  kube-proxy       
	  Normal  Starting                 7m1s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m     kubelet          Node ha-863936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m     kubelet          Node ha-863936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m     kubelet          Node ha-863936 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m57s  node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal  NodeReady                6m40s  kubelet          Node ha-863936 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	
	
	Name:               ha-863936-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:38:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:41:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-863936-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c538a90b7afb4607a2068ae6c8689740
	  System UUID:                c538a90b-7afb-4607-a206-8ae6c8689740
	  Boot ID:                    905428ee-99b5-4544-bd9e-3ece49443b02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5tjw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-863936-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-qmrb2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-863936-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-863936-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-7lvfc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-863936-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-863936-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-863936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-863936-m02 status is now: NodeNotReady
	
	
	Name:               ha-863936-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:40:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:44:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-863936-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b54ef01aeadc4a70aaecea24c80f74de
	  System UUID:                b54ef01a-eadc-4a70-aaec-ea24c80f74de
	  Boot ID:                    01b09bab-fb3a-4947-8e0c-d6a621aada21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gm458                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-863936-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-zqs4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-863936-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-863936-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-25gzj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-863936-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-863936-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-863936-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-863936-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-863936-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	
	
	Name:               ha-863936-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_41_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:41:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-863936-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13346cf592d54450aa4bb72c3dba17c9
	  System UUID:                13346cf5-92d5-4450-aa4b-b72c3dba17c9
	  Boot ID:                    51e69c7f-b3b6-4d26-8d6c-cea0170d4a5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c6wlb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-lsjgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-863936-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-863936-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 12:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779878] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.388981] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.556022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.777615] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.058123] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055634] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.181681] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.119869] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.269746] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug16 12:37] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.293923] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.058457] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.209516] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.086313] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133654] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.050308] kauditd_printk_skb: 34 callbacks suppressed
	[Aug16 12:39] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559] <==
	{"level":"warn","ts":"2024-08-16T12:44:16.312088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.323816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.325077Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.329810Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.333565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.339507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.346379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.347332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.356739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.360163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.366199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.383732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.400837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.407619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.408236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.418868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.423361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.424615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.427622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.500490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.508070Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.516039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:44:16.517118Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"a92ff6c78f5f37a8","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-16T12:44:16.517169Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a92ff6c78f5f37a8","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-16T12:44:16.522784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:44:16 up 7 min,  0 users,  load average: 0.42, 0.28, 0.15
	Linux ha-863936 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331] <==
	I0816 12:43:46.069823       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:43:56.068495       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:43:56.068575       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:43:56.068806       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:43:56.068814       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:43:56.068914       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:43:56.068992       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:43:56.069116       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:43:56.069209       1 main.go:299] handling current node
	I0816 12:44:06.072871       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:44:06.072926       1 main.go:299] handling current node
	I0816 12:44:06.072981       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:44:06.072987       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:44:06.073121       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:44:06.073147       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:44:06.073201       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:44:06.073206       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:44:16.077052       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:44:16.077096       1 main.go:299] handling current node
	I0816 12:44:16.077110       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:44:16.077116       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:44:16.077216       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:44:16.077238       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:44:16.077291       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:44:16.077312       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9] <==
	W0816 12:37:14.433357       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2]
	I0816 12:37:14.434364       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 12:37:14.438668       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 12:37:14.653639       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 12:37:15.765627       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 12:37:15.782676       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 12:37:15.950060       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 12:37:19.704519       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0816 12:37:20.357612       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0816 12:40:45.061392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53080: use of closed network connection
	E0816 12:40:45.253892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53100: use of closed network connection
	E0816 12:40:45.447692       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53124: use of closed network connection
	E0816 12:40:45.653914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53136: use of closed network connection
	E0816 12:40:45.836291       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53158: use of closed network connection
	E0816 12:40:46.026318       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53176: use of closed network connection
	E0816 12:40:46.206323       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53200: use of closed network connection
	E0816 12:40:46.381802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53218: use of closed network connection
	E0816 12:40:46.572604       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53236: use of closed network connection
	E0816 12:40:46.860849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51704: use of closed network connection
	E0816 12:40:47.031367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51730: use of closed network connection
	E0816 12:40:47.215777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51750: use of closed network connection
	E0816 12:40:47.385810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51758: use of closed network connection
	E0816 12:40:47.566136       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51772: use of closed network connection
	E0816 12:40:47.738757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51796: use of closed network connection
	W0816 12:42:14.447226       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.2]
	
	
	==> kube-controller-manager [2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62] <==
	I0816 12:41:15.332255       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-863936-m04" podCIDRs=["10.244.3.0/24"]
	I0816 12:41:15.332762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.333204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.357606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.454340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.865264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:17.660549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:18.343928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:18.408931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:19.606298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:19.606662       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-863936-m04"
	I0816 12:41:19.682214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:25.705173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:35.728377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:35.729126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863936-m04"
	I0816 12:41:35.742298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:37.620362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:46.018491       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:42:29.633456       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:42:29.634075       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863936-m04"
	I0816 12:42:29.653483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:42:29.785285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.692871ms"
	I0816 12:42:29.785365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.842µs"
	I0816 12:42:32.652242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:42:34.885086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	
	
	==> kube-proxy [4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:37:21.157152       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:37:21.180318       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	E0816 12:37:21.180543       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:37:21.233094       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:37:21.233152       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:37:21.233178       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:37:21.235918       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:37:21.236251       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:37:21.236279       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:37:21.237617       1 config.go:197] "Starting service config controller"
	I0816 12:37:21.237665       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:37:21.237685       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:37:21.237703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:37:21.238479       1 config.go:326] "Starting node config controller"
	I0816 12:37:21.238504       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:37:21.338389       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:37:21.338433       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:37:21.338640       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d] <==
	E0816 12:40:08.738668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zqs4l\": pod kindnet-zqs4l is already assigned to node \"ha-863936-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-zqs4l" node="ha-863936-m03"
	E0816 12:40:08.739389       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b9054301-c9d9-4f2e-94c9-4557d6f4af2c(kube-system/kindnet-zqs4l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zqs4l"
	E0816 12:40:08.739626       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zqs4l\": pod kindnet-zqs4l is already assigned to node \"ha-863936-m03\"" pod="kube-system/kindnet-zqs4l"
	I0816 12:40:08.739839       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zqs4l" node="ha-863936-m03"
	E0816 12:40:08.762522       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-25gzj\": pod kube-proxy-25gzj is already assigned to node \"ha-863936-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-25gzj" node="ha-863936-m03"
	E0816 12:40:08.762585       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8014f69d-cbe6-4369-8dbc-95bb5a429c22(kube-system/kube-proxy-25gzj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-25gzj"
	E0816 12:40:08.762600       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-25gzj\": pod kube-proxy-25gzj is already assigned to node \"ha-863936-m03\"" pod="kube-system/kube-proxy-25gzj"
	I0816 12:40:08.762640       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-25gzj" node="ha-863936-m03"
	E0816 12:40:38.693364       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gm458\": pod busybox-7dff88458-gm458 is already assigned to node \"ha-863936-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gm458" node="ha-863936-m02"
	E0816 12:40:38.693487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gm458\": pod busybox-7dff88458-gm458 is already assigned to node \"ha-863936-m03\"" pod="default/busybox-7dff88458-gm458"
	E0816 12:40:38.739428       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zqpfx\": pod busybox-7dff88458-zqpfx is already assigned to node \"ha-863936\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-zqpfx" node="ha-863936-m02"
	E0816 12:40:38.739543       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zqpfx\": pod busybox-7dff88458-zqpfx is already assigned to node \"ha-863936\"" pod="default/busybox-7dff88458-zqpfx"
	I0816 12:40:38.740159       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="ac686aab-89e4-4f07-8123-835111b35e68" pod="default/busybox-7dff88458-t5tjw" assumedNode="ha-863936-m02" currentNode="ha-863936-m03"
	E0816 12:40:38.740246       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5tjw\": pod busybox-7dff88458-t5tjw is already assigned to node \"ha-863936-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t5tjw" node="ha-863936-m03"
	E0816 12:40:38.740275       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ac686aab-89e4-4f07-8123-835111b35e68(default/busybox-7dff88458-t5tjw) was assumed on ha-863936-m03 but assigned to ha-863936-m02" pod="default/busybox-7dff88458-t5tjw"
	E0816 12:40:38.740288       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5tjw\": pod busybox-7dff88458-t5tjw is already assigned to node \"ha-863936-m02\"" pod="default/busybox-7dff88458-t5tjw"
	I0816 12:40:38.740306       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t5tjw" node="ha-863936-m02"
	E0816 12:41:15.413439       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c6wlb\": pod kindnet-c6wlb is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-c6wlb" node="ha-863936-m04"
	E0816 12:41:15.418107       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d6429c25-2e31-4126-9629-0389aeec7999(kube-system/kindnet-c6wlb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-c6wlb"
	E0816 12:41:15.420071       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c6wlb\": pod kindnet-c6wlb is already assigned to node \"ha-863936-m04\"" pod="kube-system/kindnet-c6wlb"
	I0816 12:41:15.420190       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c6wlb" node="ha-863936-m04"
	E0816 12:41:15.413578       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	E0816 12:41:15.424458       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71a9943c-8ebe-4a91-876f-8e47aca3f719(kube-system/kube-proxy-lsjgf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lsjgf"
	E0816 12:41:15.425608       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" pod="kube-system/kube-proxy-lsjgf"
	I0816 12:41:15.425683       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	
	
	==> kubelet <==
	Aug 16 12:43:06 ha-863936 kubelet[1336]: E0816 12:43:06.024814    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812186024097645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:15 ha-863936 kubelet[1336]: E0816 12:43:15.929192    1336 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:43:15 ha-863936 kubelet[1336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:43:15 ha-863936 kubelet[1336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:43:15 ha-863936 kubelet[1336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:43:15 ha-863936 kubelet[1336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:43:16 ha-863936 kubelet[1336]: E0816 12:43:16.026888    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812196026552732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:16 ha-863936 kubelet[1336]: E0816 12:43:16.026934    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812196026552732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:26 ha-863936 kubelet[1336]: E0816 12:43:26.029364    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812206028302269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:26 ha-863936 kubelet[1336]: E0816 12:43:26.029781    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812206028302269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:36 ha-863936 kubelet[1336]: E0816 12:43:36.032011    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812216031661261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:36 ha-863936 kubelet[1336]: E0816 12:43:36.032049    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812216031661261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:46 ha-863936 kubelet[1336]: E0816 12:43:46.033626    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812226033333739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:46 ha-863936 kubelet[1336]: E0816 12:43:46.033688    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812226033333739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:56 ha-863936 kubelet[1336]: E0816 12:43:56.035790    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812236035332547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:56 ha-863936 kubelet[1336]: E0816 12:43:56.036289    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812236035332547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:06 ha-863936 kubelet[1336]: E0816 12:44:06.039282    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812246038612761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:06 ha-863936 kubelet[1336]: E0816 12:44:06.039310    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812246038612761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:15 ha-863936 kubelet[1336]: E0816 12:44:15.932014    1336 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:44:15 ha-863936 kubelet[1336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:44:15 ha-863936 kubelet[1336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:44:15 ha-863936 kubelet[1336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:44:15 ha-863936 kubelet[1336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:44:16 ha-863936 kubelet[1336]: E0816 12:44:16.040997    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812256040525046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:16 ha-863936 kubelet[1336]: E0816 12:44:16.041025    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812256040525046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863936 -n ha-863936
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (49.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (3.200617076s)

                                                
                                                
-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:44:21.078752   27091 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:44:21.078868   27091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:21.078881   27091 out.go:358] Setting ErrFile to fd 2...
	I0816 12:44:21.078889   27091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:21.079073   27091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:44:21.079225   27091 out.go:352] Setting JSON to false
	I0816 12:44:21.079248   27091 mustload.go:65] Loading cluster: ha-863936
	I0816 12:44:21.079375   27091 notify.go:220] Checking for updates...
	I0816 12:44:21.079570   27091 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:44:21.079583   27091 status.go:255] checking status of ha-863936 ...
	I0816 12:44:21.079999   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:21.080060   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:21.095464   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0816 12:44:21.095848   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:21.096376   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:21.096396   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:21.096789   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:21.097028   27091 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:44:21.098647   27091 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:44:21.098662   27091 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:21.098947   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:21.098989   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:21.113519   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0816 12:44:21.113830   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:21.114239   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:21.114267   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:21.114510   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:21.114674   27091 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:44:21.116842   27091 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:21.117269   27091 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:21.117288   27091 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:21.117454   27091 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:21.117762   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:21.117801   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:21.131703   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0816 12:44:21.132081   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:21.132535   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:21.132557   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:21.132838   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:21.133042   27091 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:44:21.133237   27091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:21.133274   27091 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:44:21.136114   27091 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:21.136510   27091 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:21.136529   27091 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:21.136675   27091 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:44:21.136859   27091 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:44:21.137044   27091 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:44:21.137212   27091 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:44:21.212686   27091 ssh_runner.go:195] Run: systemctl --version
	I0816 12:44:21.219052   27091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:21.234091   27091 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:21.234117   27091 api_server.go:166] Checking apiserver status ...
	I0816 12:44:21.234157   27091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:21.249589   27091 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:44:21.259071   27091 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:21.259139   27091 ssh_runner.go:195] Run: ls
	I0816 12:44:21.264153   27091 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:21.268509   27091 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:21.268535   27091 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:44:21.268547   27091 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:21.268570   27091 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:44:21.268996   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:21.269036   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:21.283846   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0816 12:44:21.284227   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:21.284667   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:21.284687   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:21.285218   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:21.285441   27091 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:44:21.286947   27091 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:44:21.286962   27091 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:21.287341   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:21.287380   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:21.301802   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41253
	I0816 12:44:21.302150   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:21.302655   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:21.302678   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:21.303025   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:21.303231   27091 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:44:21.306139   27091 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:21.306513   27091 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:21.306551   27091 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:21.306627   27091 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:21.307038   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:21.307079   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:21.321482   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0816 12:44:21.321849   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:21.322258   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:21.322272   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:21.322543   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:21.322732   27091 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:44:21.322945   27091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:21.322967   27091 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:44:21.325473   27091 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:21.325902   27091 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:21.325929   27091 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:21.326064   27091 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:44:21.326230   27091 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:44:21.326364   27091 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:44:21.326494   27091 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:44:23.893204   27091 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:23.893301   27091 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:44:23.893320   27091 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:23.893327   27091 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:44:23.893343   27091 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:23.893352   27091 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:44:23.893733   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:23.893779   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:23.909591   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40921
	I0816 12:44:23.909950   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:23.910452   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:23.910473   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:23.910784   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:23.910948   27091 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:44:23.912347   27091 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:44:23.912364   27091 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:23.912746   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:23.912789   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:23.927018   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0816 12:44:23.927392   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:23.927785   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:23.927804   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:23.928136   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:23.928320   27091 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:44:23.930871   27091 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:23.931307   27091 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:23.931329   27091 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:23.931486   27091 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:23.931773   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:23.931803   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:23.945782   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0816 12:44:23.946174   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:23.946638   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:23.946662   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:23.946944   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:23.947215   27091 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:44:23.947479   27091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:23.947501   27091 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:44:23.949821   27091 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:23.950164   27091 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:23.950194   27091 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:23.950338   27091 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:44:23.950478   27091 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:44:23.950623   27091 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:44:23.950752   27091 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:44:24.036304   27091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:24.050596   27091 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:24.050627   27091 api_server.go:166] Checking apiserver status ...
	I0816 12:44:24.050662   27091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:24.066000   27091 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:44:24.074994   27091 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:24.075043   27091 ssh_runner.go:195] Run: ls
	I0816 12:44:24.079065   27091 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:24.085562   27091 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:24.085584   27091 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:44:24.085593   27091 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:24.085608   27091 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:44:24.085906   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:24.085938   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:24.101796   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0816 12:44:24.102208   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:24.102752   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:24.102774   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:24.103063   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:24.103260   27091 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:44:24.104737   27091 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:44:24.104750   27091 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:24.105034   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:24.105079   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:24.119461   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I0816 12:44:24.119872   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:24.120409   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:24.120427   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:24.120702   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:24.120836   27091 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:44:24.123492   27091 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:24.123964   27091 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:24.123994   27091 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:24.124171   27091 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:24.124470   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:24.124503   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:24.139439   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41941
	I0816 12:44:24.139813   27091 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:24.140219   27091 main.go:141] libmachine: Using API Version  1
	I0816 12:44:24.140236   27091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:24.140532   27091 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:24.140697   27091 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:44:24.140896   27091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:24.140941   27091 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:44:24.143604   27091 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:24.143929   27091 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:24.143953   27091 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:24.144067   27091 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:44:24.144224   27091 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:44:24.144395   27091 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:44:24.144541   27091 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:44:24.224228   27091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:24.238021   27091 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (5.230824161s)

                                                
                                                
-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:44:25.187730   27191 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:44:25.188012   27191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:25.188021   27191 out.go:358] Setting ErrFile to fd 2...
	I0816 12:44:25.188025   27191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:25.188188   27191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:44:25.188354   27191 out.go:352] Setting JSON to false
	I0816 12:44:25.188379   27191 mustload.go:65] Loading cluster: ha-863936
	I0816 12:44:25.188429   27191 notify.go:220] Checking for updates...
	I0816 12:44:25.188888   27191 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:44:25.188930   27191 status.go:255] checking status of ha-863936 ...
	I0816 12:44:25.189351   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:25.189419   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:25.209222   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0816 12:44:25.209666   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:25.210407   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:25.210440   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:25.210778   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:25.210956   27191 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:44:25.212566   27191 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:44:25.212581   27191 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:25.212965   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:25.213006   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:25.228783   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0816 12:44:25.229191   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:25.229655   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:25.229676   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:25.229955   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:25.230137   27191 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:44:25.233755   27191 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:25.234324   27191 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:25.234354   27191 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:25.234490   27191 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:25.234901   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:25.234968   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:25.250772   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I0816 12:44:25.251233   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:25.251697   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:25.251719   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:25.252014   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:25.252227   27191 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:44:25.252409   27191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:25.252440   27191 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:44:25.255757   27191 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:25.256195   27191 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:25.256217   27191 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:25.256388   27191 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:44:25.256565   27191 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:44:25.256719   27191 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:44:25.256877   27191 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:44:25.332628   27191 ssh_runner.go:195] Run: systemctl --version
	I0816 12:44:25.338527   27191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:25.354381   27191 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:25.354410   27191 api_server.go:166] Checking apiserver status ...
	I0816 12:44:25.354451   27191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:25.368781   27191 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:44:25.379702   27191 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:25.379764   27191 ssh_runner.go:195] Run: ls
	I0816 12:44:25.384545   27191 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:25.390822   27191 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:25.390851   27191 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:44:25.390865   27191 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:25.390885   27191 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:44:25.391362   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:25.391427   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:25.406129   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I0816 12:44:25.406644   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:25.407151   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:25.407175   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:25.407521   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:25.407691   27191 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:44:25.409376   27191 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:44:25.409391   27191 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:25.409732   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:25.409772   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:25.424374   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0816 12:44:25.424724   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:25.425129   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:25.425148   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:25.425460   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:25.425639   27191 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:44:25.428092   27191 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:25.428517   27191 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:25.428544   27191 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:25.428590   27191 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:25.428895   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:25.428948   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:25.443150   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0816 12:44:25.443533   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:25.443965   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:25.443983   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:25.444262   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:25.444445   27191 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:44:25.444616   27191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:25.444636   27191 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:44:25.447815   27191 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:25.448306   27191 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:25.448331   27191 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:25.448485   27191 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:44:25.448637   27191 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:44:25.448779   27191 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:44:25.448945   27191 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:44:26.969246   27191 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:26.969304   27191 retry.go:31] will retry after 297.402075ms: dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:30.037224   27191 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:30.037297   27191 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:44:30.037322   27191 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:30.037343   27191 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:44:30.037367   27191 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:30.037381   27191 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:44:30.037755   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:30.037803   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:30.052955   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0816 12:44:30.053322   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:30.053764   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:30.053783   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:30.054092   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:30.054273   27191 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:44:30.055790   27191 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:44:30.055805   27191 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:30.056098   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:30.056135   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:30.070786   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42137
	I0816 12:44:30.071160   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:30.071600   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:30.071617   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:30.071945   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:30.072120   27191 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:44:30.074770   27191 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:30.075180   27191 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:30.075207   27191 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:30.075321   27191 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:30.075637   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:30.075673   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:30.090394   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39699
	I0816 12:44:30.090743   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:30.091195   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:30.091210   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:30.091512   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:30.091704   27191 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:44:30.091912   27191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:30.091929   27191 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:44:30.094627   27191 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:30.094969   27191 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:30.094994   27191 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:30.095111   27191 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:44:30.095275   27191 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:44:30.095459   27191 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:44:30.095582   27191 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:44:30.176398   27191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:30.192386   27191 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:30.192412   27191 api_server.go:166] Checking apiserver status ...
	I0816 12:44:30.192455   27191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:30.206285   27191 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:44:30.216421   27191 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:30.216476   27191 ssh_runner.go:195] Run: ls
	I0816 12:44:30.220843   27191 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:30.225241   27191 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:30.225268   27191 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:44:30.225280   27191 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:30.225300   27191 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:44:30.225796   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:30.225843   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:30.240822   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
	I0816 12:44:30.241308   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:30.241786   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:30.241810   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:30.242181   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:30.242367   27191 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:44:30.244011   27191 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:44:30.244028   27191 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:30.244405   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:30.244458   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:30.260123   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0816 12:44:30.260488   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:30.260959   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:30.260981   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:30.261341   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:30.261515   27191 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:44:30.264336   27191 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:30.264761   27191 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:30.264798   27191 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:30.264919   27191 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:30.265218   27191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:30.265250   27191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:30.279656   27191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I0816 12:44:30.280035   27191 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:30.280487   27191 main.go:141] libmachine: Using API Version  1
	I0816 12:44:30.280504   27191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:30.280774   27191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:30.280957   27191 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:44:30.281154   27191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:30.281178   27191 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:44:30.283454   27191 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:30.283848   27191 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:30.283876   27191 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:30.283987   27191 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:44:30.284140   27191 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:44:30.284298   27191 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:44:30.284420   27191 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:44:30.363975   27191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:30.378250   27191 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
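The non-zero exit above is caused by ha-863936-m02 alone: the probe cannot open TCP port 22 on 192.168.39.101 (connect: no route to host), so the node is reported as Host:Error with kubelet and apiserver Nonexistent. A minimal sketch of that reachability check, assuming only the address from the log; the 5-second timeout and the rest of the program are illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.101:22" // ha-863936-m02, as reported in the log
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second) // timeout is an assumption
	if err != nil {
		// On this run the kernel answers "no route to host", which is what the
		// sshutil retries report before the node is marked Host:Error.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}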
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (4.59900755s)

-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 12:44:32.101181   27293 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:44:32.101310   27293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:32.101319   27293 out.go:358] Setting ErrFile to fd 2...
	I0816 12:44:32.101324   27293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:32.101510   27293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:44:32.101710   27293 out.go:352] Setting JSON to false
	I0816 12:44:32.101744   27293 mustload.go:65] Loading cluster: ha-863936
	I0816 12:44:32.101784   27293 notify.go:220] Checking for updates...
	I0816 12:44:32.102201   27293 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:44:32.102218   27293 status.go:255] checking status of ha-863936 ...
	I0816 12:44:32.102636   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:32.102690   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:32.123180   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0816 12:44:32.123583   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:32.124118   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:32.124148   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:32.124518   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:32.124723   27293 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:44:32.126203   27293 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:44:32.126221   27293 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:32.126529   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:32.126602   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:32.141590   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0816 12:44:32.141950   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:32.142374   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:32.142393   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:32.142684   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:32.142866   27293 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:44:32.145763   27293 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:32.146203   27293 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:32.146238   27293 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:32.146515   27293 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:32.146919   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:32.146965   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:32.161991   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0816 12:44:32.162375   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:32.162809   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:32.162823   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:32.163130   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:32.163331   27293 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:44:32.163502   27293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:32.163539   27293 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:44:32.166762   27293 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:32.167167   27293 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:32.167219   27293 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:32.167466   27293 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:44:32.167658   27293 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:44:32.167804   27293 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:44:32.167926   27293 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:44:32.247033   27293 ssh_runner.go:195] Run: systemctl --version
	I0816 12:44:32.253558   27293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:32.269083   27293 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:32.269111   27293 api_server.go:166] Checking apiserver status ...
	I0816 12:44:32.269169   27293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:32.283668   27293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:44:32.294744   27293 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:32.294808   27293 ssh_runner.go:195] Run: ls
	I0816 12:44:32.299076   27293 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:32.303243   27293 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:32.303274   27293 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:44:32.303285   27293 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:32.303300   27293 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:44:32.303671   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:32.303714   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:32.319039   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0816 12:44:32.319492   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:32.320005   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:32.320028   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:32.320354   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:32.320542   27293 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:44:32.322261   27293 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:44:32.322278   27293 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:32.322673   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:32.322707   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:32.337624   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0816 12:44:32.338022   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:32.338548   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:32.338572   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:32.338954   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:32.339144   27293 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:44:32.342122   27293 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:32.342544   27293 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:32.342566   27293 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:32.342686   27293 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:32.342967   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:32.343000   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:32.357578   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0816 12:44:32.357972   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:32.358410   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:32.358433   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:32.358721   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:32.358863   27293 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:44:32.359023   27293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:32.359042   27293 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:44:32.361608   27293 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:32.361969   27293 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:32.361997   27293 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:32.362118   27293 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:44:32.362307   27293 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:44:32.362438   27293 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:44:32.362548   27293 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:44:33.109179   27293 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:33.109245   27293 retry.go:31] will retry after 141.981783ms: dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:36.309168   27293 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:36.309287   27293 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:44:36.309308   27293 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:36.309316   27293 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:44:36.309342   27293 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:36.309349   27293 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:44:36.309640   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:36.309677   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:36.324515   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41769
	I0816 12:44:36.324941   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:36.325385   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:36.325410   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:36.325702   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:36.325883   27293 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:44:36.327396   27293 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:44:36.327409   27293 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:36.327736   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:36.327767   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:36.342446   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0816 12:44:36.342879   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:36.343423   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:36.343456   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:36.343820   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:36.344009   27293 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:44:36.346838   27293 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:36.347307   27293 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:36.347344   27293 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:36.347441   27293 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:36.347725   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:36.347755   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:36.362658   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0816 12:44:36.363030   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:36.363451   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:36.363472   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:36.363741   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:36.363928   27293 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:44:36.364088   27293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:36.364106   27293 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:44:36.366482   27293 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:36.366888   27293 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:36.366924   27293 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:36.367055   27293 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:44:36.367225   27293 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:44:36.367349   27293 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:44:36.367509   27293 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:44:36.449461   27293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:36.465793   27293 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:36.465818   27293 api_server.go:166] Checking apiserver status ...
	I0816 12:44:36.465863   27293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:36.480520   27293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:44:36.491379   27293 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:36.491451   27293 ssh_runner.go:195] Run: ls
	I0816 12:44:36.498640   27293 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:36.504187   27293 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:36.504220   27293 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:44:36.504231   27293 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:36.504248   27293 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:44:36.504632   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:36.504675   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:36.519435   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0816 12:44:36.519809   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:36.520343   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:36.520364   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:36.520687   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:36.520929   27293 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:44:36.522390   27293 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:44:36.522406   27293 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:36.522695   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:36.522726   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:36.537753   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33829
	I0816 12:44:36.538196   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:36.538624   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:36.538643   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:36.538954   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:36.539154   27293 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:44:36.542232   27293 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:36.542714   27293 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:36.542754   27293 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:36.542886   27293 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:36.543261   27293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:36.543306   27293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:36.557616   27293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0816 12:44:36.558044   27293 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:36.558484   27293 main.go:141] libmachine: Using API Version  1
	I0816 12:44:36.558507   27293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:36.558775   27293 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:36.558943   27293 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:44:36.559099   27293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:36.559117   27293 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:44:36.561887   27293 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:36.562251   27293 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:36.562285   27293 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:36.562425   27293 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:44:36.562587   27293 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:44:36.562735   27293 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:44:36.562828   27293 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:44:36.644770   27293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:36.660129   27293 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
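For the nodes it can reach, the probe settles apiserver health with an HTTPS GET against the cluster VIP, https://192.168.39.254:8443/healthz, which returns 200: ok in the runs above. A minimal sketch of that request; the URL is taken from the log, while the timeout and the skipped certificate verification are assumptions made only to keep the sketch self-contained (this is not how minikube itself configures TLS):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.254:8443/healthz" // VIP and path from the log
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption
		// Verification is skipped only to keep the sketch runnable anywhere.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}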
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (3.713245468s)

-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 12:44:39.795877   27410 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:44:39.795968   27410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:39.795976   27410 out.go:358] Setting ErrFile to fd 2...
	I0816 12:44:39.795980   27410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:39.796147   27410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:44:39.796296   27410 out.go:352] Setting JSON to false
	I0816 12:44:39.796338   27410 mustload.go:65] Loading cluster: ha-863936
	I0816 12:44:39.796454   27410 notify.go:220] Checking for updates...
	I0816 12:44:39.796716   27410 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:44:39.796730   27410 status.go:255] checking status of ha-863936 ...
	I0816 12:44:39.797210   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:39.797275   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:39.815252   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I0816 12:44:39.815677   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:39.816235   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:39.816271   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:39.816667   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:39.816878   27410 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:44:39.818564   27410 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:44:39.818578   27410 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:39.818837   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:39.818867   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:39.833696   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0816 12:44:39.834124   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:39.834734   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:39.834760   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:39.835041   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:39.835201   27410 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:44:39.838063   27410 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:39.838440   27410 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:39.838472   27410 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:39.838609   27410 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:39.838887   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:39.838918   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:39.853131   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I0816 12:44:39.853485   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:39.853872   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:39.853891   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:39.854161   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:39.854355   27410 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:44:39.854532   27410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:39.854569   27410 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:44:39.857004   27410 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:39.857374   27410 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:39.857389   27410 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:39.857547   27410 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:44:39.857711   27410 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:44:39.857854   27410 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:44:39.857990   27410 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:44:39.936977   27410 ssh_runner.go:195] Run: systemctl --version
	I0816 12:44:39.943803   27410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:39.959449   27410 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:39.959478   27410 api_server.go:166] Checking apiserver status ...
	I0816 12:44:39.959512   27410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:39.973828   27410 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:44:39.983927   27410 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:39.983989   27410 ssh_runner.go:195] Run: ls
	I0816 12:44:39.989035   27410 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:39.994568   27410 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:39.994596   27410 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:44:39.994604   27410 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:39.994621   27410 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:44:39.994899   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:39.994931   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:40.010399   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0816 12:44:40.010830   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:40.011333   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:40.011354   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:40.011638   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:40.011808   27410 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:44:40.013194   27410 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:44:40.013222   27410 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:40.013667   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:40.013710   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:40.028023   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32771
	I0816 12:44:40.028423   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:40.028984   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:40.029006   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:40.029288   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:40.029466   27410 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:44:40.031955   27410 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:40.032310   27410 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:40.032334   27410 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:40.032442   27410 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:40.032765   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:40.032798   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:40.047229   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
	I0816 12:44:40.047603   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:40.047992   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:40.048009   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:40.048272   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:40.048468   27410 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:44:40.048619   27410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:40.048638   27410 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:44:40.051178   27410 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:40.051563   27410 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:40.051599   27410 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:40.051701   27410 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:44:40.051849   27410 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:44:40.051989   27410 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:44:40.052097   27410 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:44:43.125149   27410 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:43.125240   27410 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:44:43.125275   27410 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:43.125296   27410 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:44:43.125320   27410 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:43.125334   27410 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:44:43.125652   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:43.125703   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:43.140369   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0816 12:44:43.140747   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:43.141228   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:43.141248   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:43.141538   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:43.141756   27410 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:44:43.143262   27410 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:44:43.143276   27410 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:43.143554   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:43.143589   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:43.157361   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41263
	I0816 12:44:43.157712   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:43.158097   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:43.158117   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:43.158394   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:43.158573   27410 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:44:43.161472   27410 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:43.161978   27410 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:43.162014   27410 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:43.162209   27410 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:43.162503   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:43.162535   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:43.176754   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0816 12:44:43.177142   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:43.177624   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:43.177648   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:43.177929   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:43.178120   27410 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:44:43.178309   27410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:43.178330   27410 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:44:43.180954   27410 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:43.181401   27410 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:43.181441   27410 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:43.181578   27410 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:44:43.181746   27410 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:44:43.181889   27410 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:44:43.182018   27410 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:44:43.260733   27410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:43.276896   27410 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:43.276947   27410 api_server.go:166] Checking apiserver status ...
	I0816 12:44:43.276994   27410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:43.293995   27410 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:44:43.303889   27410 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:43.303934   27410 ssh_runner.go:195] Run: ls
	I0816 12:44:43.308371   27410 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:43.312624   27410 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:43.312644   27410 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:44:43.312652   27410 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:43.312665   27410 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:44:43.312962   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:43.312992   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:43.327653   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0816 12:44:43.328076   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:43.328551   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:43.328570   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:43.328863   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:43.329059   27410 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:44:43.330675   27410 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:44:43.330689   27410 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:43.330958   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:43.330996   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:43.345274   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0816 12:44:43.345706   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:43.346199   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:43.346222   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:43.346494   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:43.346667   27410 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:44:43.349411   27410 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:43.349793   27410 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:43.349823   27410 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:43.349896   27410 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:43.350216   27410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:43.350247   27410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:43.364728   27410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39631
	I0816 12:44:43.365105   27410 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:43.365531   27410 main.go:141] libmachine: Using API Version  1
	I0816 12:44:43.365544   27410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:43.365852   27410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:43.365982   27410 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:44:43.366116   27410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:43.366137   27410 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:44:43.369137   27410 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:43.369533   27410 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:43.369560   27410 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:43.369733   27410 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:44:43.369897   27410 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:44:43.370024   27410 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:44:43.370172   27410 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:44:43.452390   27410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:43.466161   27410 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
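The non-zero exit above is driven entirely by ha-863936-m02: the SSH dial to 192.168.39.101:22 fails with "connect: no route to host", so the status command reports host: Error / kubelet: Nonexistent / apiserver: Nonexistent for that node while the other nodes stay Running. As a rough, self-contained illustration (not minikube's sshutil code), the failing reachability check can be reproduced with a plain TCP dial; the address below is simply the one reported in the log.

// probe_ssh.go - minimal sketch reproducing the failing dial seen in the log
// ("dial tcp 192.168.39.101:22: connect: no route to host"). Illustration
// only; this is not minikube's sshutil implementation.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SSH endpoint of ha-863936-m02, taken from the log above.
	addr := "192.168.39.101:22"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Expected while the node is unreachable: "connect: no route to host".
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}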
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (3.683782s)

-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 12:44:47.918482   27510 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:44:47.918578   27510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:47.918585   27510 out.go:358] Setting ErrFile to fd 2...
	I0816 12:44:47.918589   27510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:47.918743   27510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:44:47.918885   27510 out.go:352] Setting JSON to false
	I0816 12:44:47.918907   27510 mustload.go:65] Loading cluster: ha-863936
	I0816 12:44:47.919014   27510 notify.go:220] Checking for updates...
	I0816 12:44:47.919245   27510 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:44:47.919259   27510 status.go:255] checking status of ha-863936 ...
	I0816 12:44:47.919605   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:47.919656   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:47.940779   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0816 12:44:47.941184   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:47.941662   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:47.941685   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:47.942111   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:47.942302   27510 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:44:47.943863   27510 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:44:47.943878   27510 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:47.944167   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:47.944218   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:47.958974   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0816 12:44:47.959389   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:47.959790   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:47.959816   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:47.960119   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:47.960291   27510 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:44:47.963243   27510 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:47.963618   27510 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:47.963644   27510 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:47.963815   27510 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:47.964193   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:47.964240   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:47.978882   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I0816 12:44:47.979297   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:47.979763   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:47.979785   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:47.980078   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:47.980257   27510 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:44:47.980455   27510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:47.980486   27510 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:44:47.982893   27510 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:47.983222   27510 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:47.983241   27510 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:47.983354   27510 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:44:47.983528   27510 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:44:47.983636   27510 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:44:47.983751   27510 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:44:48.060204   27510 ssh_runner.go:195] Run: systemctl --version
	I0816 12:44:48.066461   27510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:48.081759   27510 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:48.081786   27510 api_server.go:166] Checking apiserver status ...
	I0816 12:44:48.081814   27510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:48.094607   27510 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:44:48.103483   27510 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:48.103528   27510 ssh_runner.go:195] Run: ls
	I0816 12:44:48.108338   27510 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:48.113928   27510 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:48.113946   27510 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:44:48.113955   27510 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:48.113970   27510 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:44:48.114267   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:48.114305   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:48.128833   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0816 12:44:48.129282   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:48.129772   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:48.129790   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:48.130042   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:48.130202   27510 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:44:48.131752   27510 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:44:48.131769   27510 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:48.132044   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:48.132074   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:48.146050   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0816 12:44:48.146513   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:48.147047   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:48.147072   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:48.147375   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:48.147580   27510 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:44:48.149987   27510 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:48.150395   27510 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:48.150419   27510 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:48.150550   27510 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:48.150940   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:48.150983   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:48.165125   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0816 12:44:48.165493   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:48.165992   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:48.166016   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:48.166306   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:48.166503   27510 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:44:48.166647   27510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:48.166670   27510 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:44:48.169137   27510 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:48.169512   27510 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:48.169540   27510 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:48.169664   27510 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:44:48.169827   27510 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:44:48.169988   27510 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:44:48.170121   27510 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:44:51.221277   27510 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:44:51.221380   27510 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:44:51.221396   27510 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:51.221404   27510 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:44:51.221421   27510 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:44:51.221429   27510 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:44:51.221819   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:51.221863   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:51.236590   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0816 12:44:51.237009   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:51.237530   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:51.237554   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:51.237855   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:51.238041   27510 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:44:51.239427   27510 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:44:51.239443   27510 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:51.239723   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:51.239761   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:51.254850   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0816 12:44:51.255293   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:51.255756   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:51.255780   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:51.256053   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:51.256253   27510 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:44:51.258940   27510 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:51.259290   27510 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:51.259316   27510 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:51.259493   27510 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:44:51.259783   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:51.259816   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:51.274498   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0816 12:44:51.274853   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:51.275377   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:51.275394   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:51.275683   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:51.275877   27510 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:44:51.276057   27510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:51.276076   27510 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:44:51.278324   27510 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:51.278704   27510 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:44:51.278726   27510 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:44:51.278925   27510 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:44:51.279094   27510 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:44:51.279253   27510 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:44:51.279371   27510 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:44:51.360464   27510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:51.374927   27510 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:51.374952   27510 api_server.go:166] Checking apiserver status ...
	I0816 12:44:51.374985   27510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:51.388630   27510 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:44:51.399047   27510 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:51.399119   27510 ssh_runner.go:195] Run: ls
	I0816 12:44:51.403799   27510 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:51.409709   27510 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:51.409737   27510 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:44:51.409748   27510 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:51.409767   27510 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:44:51.410094   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:51.410143   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:51.425919   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0816 12:44:51.426345   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:51.426837   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:51.426854   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:51.427207   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:51.427369   27510 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:44:51.428854   27510 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:44:51.428870   27510 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:51.429224   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:51.429256   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:51.443933   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0816 12:44:51.444292   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:51.444709   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:51.444732   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:51.445045   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:51.445243   27510 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:44:51.448159   27510 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:51.448501   27510 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:51.448536   27510 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:51.448619   27510 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:44:51.448901   27510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:51.448957   27510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:51.463722   27510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0816 12:44:51.464059   27510 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:51.464508   27510 main.go:141] libmachine: Using API Version  1
	I0816 12:44:51.464528   27510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:51.464803   27510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:51.464992   27510 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:44:51.465167   27510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:51.465188   27510 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:44:51.467526   27510 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:51.467932   27510 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:44:51.467968   27510 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:44:51.468212   27510 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:44:51.468375   27510 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:44:51.468504   27510 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:44:51.468658   27510 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:44:51.548779   27510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:51.563088   27510 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
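Each retry also re-confirms that the control-plane VIP is still serving: both ha-863936 and ha-863936-m03 answer https://192.168.39.254:8443/healthz with "200: ok". A minimal stand-alone sketch of such a healthz probe is shown below; skipping TLS verification is an assumption made here for brevity and is not taken from minikube's api_server.go.

// healthz_probe.go - stand-alone sketch of an apiserver healthz probe like the
// one logged above. InsecureSkipVerify is an assumption for brevity, not
// minikube's behavior.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// VIP and port of the HA control plane, as reported in the log above.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}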
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (3.730371333s)

-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 12:44:56.780415   27626 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:44:56.780745   27626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:56.780814   27626 out.go:358] Setting ErrFile to fd 2...
	I0816 12:44:56.780894   27626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:44:56.781362   27626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:44:56.781845   27626 out.go:352] Setting JSON to false
	I0816 12:44:56.781879   27626 mustload.go:65] Loading cluster: ha-863936
	I0816 12:44:56.781989   27626 notify.go:220] Checking for updates...
	I0816 12:44:56.782351   27626 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:44:56.782372   27626 status.go:255] checking status of ha-863936 ...
	I0816 12:44:56.782735   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:56.782796   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:56.803123   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0816 12:44:56.803523   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:56.804046   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:44:56.804060   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:56.804384   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:56.804545   27626 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:44:56.806635   27626 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:44:56.806694   27626 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:56.807002   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:56.807041   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:56.821327   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41953
	I0816 12:44:56.821752   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:56.822210   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:44:56.822231   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:56.822475   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:56.822626   27626 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:44:56.824903   27626 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:56.825311   27626 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:56.825338   27626 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:56.825467   27626 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:44:56.825833   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:56.825876   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:56.840238   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38899
	I0816 12:44:56.840576   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:56.841005   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:44:56.841026   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:56.841312   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:56.841495   27626 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:44:56.841634   27626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:56.841658   27626 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:44:56.843956   27626 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:56.844321   27626 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:44:56.844350   27626 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:44:56.844483   27626 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:44:56.844652   27626 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:44:56.844824   27626 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:44:56.844990   27626 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:44:56.921128   27626 ssh_runner.go:195] Run: systemctl --version
	I0816 12:44:56.928465   27626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:44:56.943060   27626 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:44:56.943086   27626 api_server.go:166] Checking apiserver status ...
	I0816 12:44:56.943120   27626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:44:56.958448   27626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:44:56.970824   27626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:44:56.970864   27626 ssh_runner.go:195] Run: ls
	I0816 12:44:56.975598   27626 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:44:56.980483   27626 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:44:56.980502   27626 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:44:56.980514   27626 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:44:56.980537   27626 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:44:56.980829   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:56.980868   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:56.995897   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0816 12:44:56.996306   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:56.996826   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:44:56.996847   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:56.997195   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:56.997446   27626 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:44:56.999078   27626 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:44:56.999096   27626 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:56.999458   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:56.999501   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:57.014949   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I0816 12:44:57.015364   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:57.015797   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:44:57.015820   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:57.016079   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:57.016263   27626 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:44:57.019518   27626 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:57.019942   27626 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:57.019967   27626 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:57.020155   27626 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:44:57.020516   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:44:57.020587   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:44:57.035409   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0816 12:44:57.035832   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:44:57.036294   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:44:57.036319   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:44:57.036592   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:44:57.036805   27626 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:44:57.037024   27626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:44:57.037047   27626 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:44:57.039700   27626 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:57.040110   27626 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:44:57.040136   27626 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:44:57.040258   27626 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:44:57.040435   27626 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:44:57.040561   27626 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:44:57.040684   27626 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	W0816 12:45:00.117168   27626 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0816 12:45:00.117267   27626 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0816 12:45:00.117282   27626 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:45:00.117289   27626 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:45:00.117308   27626 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0816 12:45:00.117316   27626 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:45:00.117656   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:00.117698   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:00.132694   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0816 12:45:00.133206   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:00.133632   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:45:00.133668   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:00.133947   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:00.134147   27626 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:45:00.135883   27626 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:45:00.135896   27626 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:45:00.136220   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:00.136280   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:00.150975   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I0816 12:45:00.151410   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:00.151864   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:45:00.151885   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:00.152204   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:00.152387   27626 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:45:00.155348   27626 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:00.155851   27626 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:45:00.155880   27626 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:00.156050   27626 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:45:00.156627   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:00.156678   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:00.173392   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0816 12:45:00.173781   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:00.174213   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:45:00.174234   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:00.174559   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:00.174724   27626 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:45:00.174926   27626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:45:00.174947   27626 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:45:00.177870   27626 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:00.178325   27626 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:45:00.178348   27626 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:00.178520   27626 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:45:00.178691   27626 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:45:00.178855   27626 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:45:00.178980   27626 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:45:00.261175   27626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:45:00.277298   27626 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:45:00.277329   27626 api_server.go:166] Checking apiserver status ...
	I0816 12:45:00.277390   27626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:45:00.292147   27626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:45:00.302992   27626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:45:00.303058   27626 ssh_runner.go:195] Run: ls
	I0816 12:45:00.307776   27626 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:45:00.312306   27626 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:45:00.312345   27626 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:45:00.312356   27626 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:45:00.312370   27626 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:45:00.312658   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:00.312693   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:00.327906   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0816 12:45:00.328346   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:00.328852   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:45:00.328871   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:00.329212   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:00.329442   27626 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:45:00.331118   27626 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:45:00.331131   27626 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:45:00.331527   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:00.331573   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:00.346374   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0816 12:45:00.346866   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:00.347367   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:45:00.347390   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:00.347756   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:00.347989   27626 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:45:00.350800   27626 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:00.351187   27626 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:45:00.351213   27626 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:00.351361   27626 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:45:00.351667   27626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:00.351703   27626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:00.366191   27626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0816 12:45:00.366572   27626 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:00.366976   27626 main.go:141] libmachine: Using API Version  1
	I0816 12:45:00.366998   27626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:00.367336   27626 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:00.367512   27626 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:45:00.367685   27626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:45:00.367707   27626 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:45:00.370450   27626 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:00.370748   27626 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:45:00.370768   27626 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:00.370915   27626 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:45:00.371075   27626 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:45:00.371198   27626 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:45:00.371376   27626 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:45:00.457503   27626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:45:00.471895   27626 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
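
The stderr above shows the per-node checks behind `minikube status`: for each VM the kvm2 plugin resolves the SSH endpoint from the libvirt DHCP lease, then ssh_runner checks root-disk usage, kubelet, and (on control-plane nodes) the kube-apiserver process and the cluster /healthz endpoint. The "unable to find freezer cgroup" warning is expected on cgroup v2 hosts, where no freezer controller appears in /proc/<pid>/cgroup, so the code falls through to the healthz probe. A minimal sketch of the same checks run by hand against ha-863936-m03, reusing the IP, user, and key path from the log (bundling them into a single ssh invocation is an assumption for illustration, not what status itself does):

	# disk usage of /var, kubelet activity, and apiserver pid, as in the ssh_runner lines above
	ssh -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa docker@192.168.39.116 \
	  "df -h /var | awk 'NR==2{print \$5}'; \
	   sudo systemctl is-active --quiet service kubelet && echo kubelet:active; \
	   sudo pgrep -xnf kube-apiserver.*minikube.*"
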
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 7 (638.438261ms)

                                                
                                                
-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-863936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
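
The stdout above is the core of the failure: after the `node start m02` step, ha-863936-m02 still reports host, kubelet, apiserver, and kubeconfig as Stopped, which is presumably why the status command exits with status 7. Per the Audit table in the post-mortem logs below, the sequence that led here was (profile name and flags as recorded there):

	out/minikube-linux-amd64 -p ha-863936 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-863936 node start m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
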
** stderr ** 
	I0816 12:45:07.602550   27763 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:45:07.602801   27763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:45:07.602809   27763 out.go:358] Setting ErrFile to fd 2...
	I0816 12:45:07.602813   27763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:45:07.602968   27763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:45:07.603124   27763 out.go:352] Setting JSON to false
	I0816 12:45:07.603148   27763 mustload.go:65] Loading cluster: ha-863936
	I0816 12:45:07.603273   27763 notify.go:220] Checking for updates...
	I0816 12:45:07.603548   27763 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:45:07.603565   27763 status.go:255] checking status of ha-863936 ...
	I0816 12:45:07.604002   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.604074   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.620798   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 12:45:07.621374   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.621989   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.622015   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.622570   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.622770   27763 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:45:07.624644   27763 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:45:07.624660   27763 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:45:07.624970   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.625012   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.639413   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0816 12:45:07.639850   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.640309   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.640330   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.640710   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.640954   27763 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:45:07.643689   27763 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:45:07.644084   27763 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:45:07.644107   27763 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:45:07.644276   27763 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:45:07.644573   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.644614   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.660052   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0816 12:45:07.660491   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.661049   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.661072   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.661475   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.661739   27763 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:45:07.661992   27763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:45:07.662026   27763 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:45:07.665396   27763 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:45:07.665764   27763 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:45:07.665798   27763 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:45:07.665938   27763 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:45:07.666101   27763 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:45:07.666246   27763 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:45:07.666365   27763 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:45:07.746578   27763 ssh_runner.go:195] Run: systemctl --version
	I0816 12:45:07.753591   27763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:45:07.772979   27763 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:45:07.773007   27763 api_server.go:166] Checking apiserver status ...
	I0816 12:45:07.773055   27763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:45:07.797957   27763 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0816 12:45:07.809097   27763 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:45:07.809152   27763 ssh_runner.go:195] Run: ls
	I0816 12:45:07.814076   27763 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:45:07.818454   27763 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:45:07.818474   27763 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:45:07.818482   27763 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:45:07.818506   27763 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:45:07.818792   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.818828   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.833169   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
	I0816 12:45:07.833594   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.834038   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.834061   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.834446   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.834612   27763 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:45:07.836170   27763 status.go:330] ha-863936-m02 host status = "Stopped" (err=<nil>)
	I0816 12:45:07.836185   27763 status.go:343] host is not running, skipping remaining checks
	I0816 12:45:07.836190   27763 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:45:07.836203   27763 status.go:255] checking status of ha-863936-m03 ...
	I0816 12:45:07.836510   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.836545   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.851508   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
	I0816 12:45:07.851907   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.852333   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.852352   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.852683   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.852845   27763 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:45:07.854476   27763 status.go:330] ha-863936-m03 host status = "Running" (err=<nil>)
	I0816 12:45:07.854489   27763 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:45:07.854816   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.854856   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.869180   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0816 12:45:07.869503   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.869899   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.869917   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.870249   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.870446   27763 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:45:07.873251   27763 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:07.873632   27763 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:45:07.873670   27763 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:07.873805   27763 host.go:66] Checking if "ha-863936-m03" exists ...
	I0816 12:45:07.874075   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:07.874110   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:07.888446   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41329
	I0816 12:45:07.888794   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:07.889255   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:07.889278   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:07.889563   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:07.889724   27763 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:45:07.889977   27763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:45:07.889997   27763 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:45:07.892486   27763 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:07.892945   27763 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:45:07.892970   27763 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:07.893141   27763 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:45:07.893299   27763 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:45:07.893444   27763 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:45:07.893574   27763 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:45:07.981926   27763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:45:07.999713   27763 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:45:07.999747   27763 api_server.go:166] Checking apiserver status ...
	I0816 12:45:07.999796   27763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:45:08.015941   27763 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	W0816 12:45:08.029859   27763 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:45:08.029943   27763 ssh_runner.go:195] Run: ls
	I0816 12:45:08.034949   27763 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:45:08.039349   27763 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:45:08.039384   27763 status.go:422] ha-863936-m03 apiserver status = Running (err=<nil>)
	I0816 12:45:08.039395   27763 status.go:257] ha-863936-m03 status: &{Name:ha-863936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:45:08.039418   27763 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:45:08.039727   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:08.039771   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:08.055267   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0816 12:45:08.055631   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:08.056170   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:08.056191   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:08.056536   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:08.056711   27763 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:45:08.058073   27763 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:45:08.058087   27763 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:45:08.058379   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:08.058410   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:08.072310   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I0816 12:45:08.072615   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:08.073045   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:08.073064   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:08.073372   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:08.073549   27763 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:45:08.076056   27763 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:08.076559   27763 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:45:08.076584   27763 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:08.076766   27763 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:45:08.077094   27763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:08.077133   27763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:08.091540   27763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0816 12:45:08.091903   27763 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:08.092348   27763 main.go:141] libmachine: Using API Version  1
	I0816 12:45:08.092377   27763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:08.092654   27763 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:08.092861   27763 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:45:08.093022   27763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:45:08.093043   27763 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:45:08.096008   27763 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:08.096455   27763 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:45:08.096482   27763 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:08.096611   27763 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:45:08.096767   27763 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:45:08.096984   27763 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:45:08.097095   27763 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:45:08.180713   27763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:45:08.198813   27763 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr" : exit status 7
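
Note that even with m02 down, the stderr shows status resolving the cluster server to the HA virtual IP https://192.168.39.254:8443 from the kubeconfig and getting a 200 from /healthz, so the surviving control-plane nodes are still serving the API. A quick manual equivalent from the host (assuming anonymous access to /healthz, the Kubernetes default via the system:public-info-viewer binding; -k skips verification against the profile's CA):

	curl -sk https://192.168.39.254:8443/healthz
	# expected output: ok
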
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863936 -n ha-863936
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863936 logs -n 25: (1.364350808s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936:/home/docker/cp-test_ha-863936-m03_ha-863936.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936 sudo cat                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m04 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp testdata/cp-test.txt                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936:/home/docker/cp-test_ha-863936-m04_ha-863936.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936 sudo cat                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03:/home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m03 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863936 node stop m02 -v=7                                                     | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-863936 node start m02 -v=7                                                    | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:36:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:36:33.028737   22106 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:36:33.029022   22106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:36:33.029032   22106 out.go:358] Setting ErrFile to fd 2...
	I0816 12:36:33.029038   22106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:36:33.029244   22106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:36:33.029799   22106 out.go:352] Setting JSON to false
	I0816 12:36:33.030663   22106 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1138,"bootTime":1723810655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:36:33.030718   22106 start.go:139] virtualization: kvm guest
	I0816 12:36:33.032809   22106 out.go:177] * [ha-863936] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:36:33.034134   22106 notify.go:220] Checking for updates...
	I0816 12:36:33.034197   22106 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:36:33.035350   22106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:36:33.036498   22106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:36:33.037706   22106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:36:33.038927   22106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:36:33.040084   22106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:36:33.041429   22106 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:36:33.075523   22106 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 12:36:33.076768   22106 start.go:297] selected driver: kvm2
	I0816 12:36:33.076793   22106 start.go:901] validating driver "kvm2" against <nil>
	I0816 12:36:33.076808   22106 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:36:33.077467   22106 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:36:33.077544   22106 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:36:33.091248   22106 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:36:33.091295   22106 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:36:33.091522   22106 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:36:33.091549   22106 cni.go:84] Creating CNI manager for ""
	I0816 12:36:33.091555   22106 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 12:36:33.091564   22106 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 12:36:33.091604   22106 start.go:340] cluster config:
	{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0816 12:36:33.091685   22106 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:36:33.093441   22106 out.go:177] * Starting "ha-863936" primary control-plane node in "ha-863936" cluster
	I0816 12:36:33.094542   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:36:33.094580   22106 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:36:33.094590   22106 cache.go:56] Caching tarball of preloaded images
	I0816 12:36:33.094653   22106 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:36:33.094663   22106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:36:33.094930   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:36:33.094948   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json: {Name:mkbf2b129b047186e4a4a70a39c941aa37bc0fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:36:33.095073   22106 start.go:360] acquireMachinesLock for ha-863936: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:36:33.095100   22106 start.go:364] duration metric: took 14.702µs to acquireMachinesLock for "ha-863936"
	I0816 12:36:33.095116   22106 start.go:93] Provisioning new machine with config: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:36:33.095178   22106 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 12:36:33.096737   22106 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 12:36:33.096862   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:36:33.096894   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:36:33.110446   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38001
	I0816 12:36:33.110839   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:36:33.111381   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:36:33.111408   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:36:33.111738   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:36:33.111902   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:33.112046   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:33.112171   22106 start.go:159] libmachine.API.Create for "ha-863936" (driver="kvm2")
	I0816 12:36:33.112198   22106 client.go:168] LocalClient.Create starting
	I0816 12:36:33.112229   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:36:33.112263   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:36:33.112279   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:36:33.112331   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:36:33.112349   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:36:33.112362   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:36:33.112377   22106 main.go:141] libmachine: Running pre-create checks...
	I0816 12:36:33.112389   22106 main.go:141] libmachine: (ha-863936) Calling .PreCreateCheck
	I0816 12:36:33.112703   22106 main.go:141] libmachine: (ha-863936) Calling .GetConfigRaw
	I0816 12:36:33.113064   22106 main.go:141] libmachine: Creating machine...
	I0816 12:36:33.113077   22106 main.go:141] libmachine: (ha-863936) Calling .Create
	I0816 12:36:33.113203   22106 main.go:141] libmachine: (ha-863936) Creating KVM machine...
	I0816 12:36:33.114386   22106 main.go:141] libmachine: (ha-863936) DBG | found existing default KVM network
	I0816 12:36:33.114969   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.114854   22145 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0816 12:36:33.115010   22106 main.go:141] libmachine: (ha-863936) DBG | created network xml: 
	I0816 12:36:33.115031   22106 main.go:141] libmachine: (ha-863936) DBG | <network>
	I0816 12:36:33.115042   22106 main.go:141] libmachine: (ha-863936) DBG |   <name>mk-ha-863936</name>
	I0816 12:36:33.115060   22106 main.go:141] libmachine: (ha-863936) DBG |   <dns enable='no'/>
	I0816 12:36:33.115072   22106 main.go:141] libmachine: (ha-863936) DBG |   
	I0816 12:36:33.115089   22106 main.go:141] libmachine: (ha-863936) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 12:36:33.115100   22106 main.go:141] libmachine: (ha-863936) DBG |     <dhcp>
	I0816 12:36:33.115109   22106 main.go:141] libmachine: (ha-863936) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 12:36:33.115126   22106 main.go:141] libmachine: (ha-863936) DBG |     </dhcp>
	I0816 12:36:33.115136   22106 main.go:141] libmachine: (ha-863936) DBG |   </ip>
	I0816 12:36:33.115144   22106 main.go:141] libmachine: (ha-863936) DBG |   
	I0816 12:36:33.115148   22106 main.go:141] libmachine: (ha-863936) DBG | </network>
	I0816 12:36:33.115155   22106 main.go:141] libmachine: (ha-863936) DBG | 
	I0816 12:36:33.119982   22106 main.go:141] libmachine: (ha-863936) DBG | trying to create private KVM network mk-ha-863936 192.168.39.0/24...
	I0816 12:36:33.182767   22106 main.go:141] libmachine: (ha-863936) DBG | private KVM network mk-ha-863936 192.168.39.0/24 created
	I0816 12:36:33.182793   22106 main.go:141] libmachine: (ha-863936) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936 ...
	I0816 12:36:33.182818   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.182754   22145 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:36:33.182837   22106 main.go:141] libmachine: (ha-863936) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:36:33.182872   22106 main.go:141] libmachine: (ha-863936) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:36:33.429831   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.429695   22145 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa...
	I0816 12:36:33.532414   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.532299   22145 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/ha-863936.rawdisk...
	I0816 12:36:33.532446   22106 main.go:141] libmachine: (ha-863936) DBG | Writing magic tar header
	I0816 12:36:33.532460   22106 main.go:141] libmachine: (ha-863936) DBG | Writing SSH key tar header
	I0816 12:36:33.532471   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:33.532406   22145 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936 ...
	I0816 12:36:33.532567   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936
	I0816 12:36:33.532596   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:36:33.532610   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936 (perms=drwx------)
	I0816 12:36:33.532619   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:36:33.532632   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:36:33.532639   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:36:33.532645   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:36:33.532655   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:36:33.532662   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:36:33.532670   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:36:33.532675   22106 main.go:141] libmachine: (ha-863936) DBG | Checking permissions on dir: /home
	I0816 12:36:33.532685   22106 main.go:141] libmachine: (ha-863936) DBG | Skipping /home - not owner
	I0816 12:36:33.532694   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:36:33.532700   22106 main.go:141] libmachine: (ha-863936) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:36:33.532747   22106 main.go:141] libmachine: (ha-863936) Creating domain...
	I0816 12:36:33.533598   22106 main.go:141] libmachine: (ha-863936) define libvirt domain using xml: 
	I0816 12:36:33.533614   22106 main.go:141] libmachine: (ha-863936) <domain type='kvm'>
	I0816 12:36:33.533620   22106 main.go:141] libmachine: (ha-863936)   <name>ha-863936</name>
	I0816 12:36:33.533625   22106 main.go:141] libmachine: (ha-863936)   <memory unit='MiB'>2200</memory>
	I0816 12:36:33.533633   22106 main.go:141] libmachine: (ha-863936)   <vcpu>2</vcpu>
	I0816 12:36:33.533643   22106 main.go:141] libmachine: (ha-863936)   <features>
	I0816 12:36:33.533674   22106 main.go:141] libmachine: (ha-863936)     <acpi/>
	I0816 12:36:33.533697   22106 main.go:141] libmachine: (ha-863936)     <apic/>
	I0816 12:36:33.533704   22106 main.go:141] libmachine: (ha-863936)     <pae/>
	I0816 12:36:33.533720   22106 main.go:141] libmachine: (ha-863936)     
	I0816 12:36:33.533731   22106 main.go:141] libmachine: (ha-863936)   </features>
	I0816 12:36:33.533736   22106 main.go:141] libmachine: (ha-863936)   <cpu mode='host-passthrough'>
	I0816 12:36:33.533741   22106 main.go:141] libmachine: (ha-863936)   
	I0816 12:36:33.533746   22106 main.go:141] libmachine: (ha-863936)   </cpu>
	I0816 12:36:33.533754   22106 main.go:141] libmachine: (ha-863936)   <os>
	I0816 12:36:33.533768   22106 main.go:141] libmachine: (ha-863936)     <type>hvm</type>
	I0816 12:36:33.533780   22106 main.go:141] libmachine: (ha-863936)     <boot dev='cdrom'/>
	I0816 12:36:33.533788   22106 main.go:141] libmachine: (ha-863936)     <boot dev='hd'/>
	I0816 12:36:33.533796   22106 main.go:141] libmachine: (ha-863936)     <bootmenu enable='no'/>
	I0816 12:36:33.533803   22106 main.go:141] libmachine: (ha-863936)   </os>
	I0816 12:36:33.533808   22106 main.go:141] libmachine: (ha-863936)   <devices>
	I0816 12:36:33.533813   22106 main.go:141] libmachine: (ha-863936)     <disk type='file' device='cdrom'>
	I0816 12:36:33.533820   22106 main.go:141] libmachine: (ha-863936)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/boot2docker.iso'/>
	I0816 12:36:33.533830   22106 main.go:141] libmachine: (ha-863936)       <target dev='hdc' bus='scsi'/>
	I0816 12:36:33.533837   22106 main.go:141] libmachine: (ha-863936)       <readonly/>
	I0816 12:36:33.533844   22106 main.go:141] libmachine: (ha-863936)     </disk>
	I0816 12:36:33.533859   22106 main.go:141] libmachine: (ha-863936)     <disk type='file' device='disk'>
	I0816 12:36:33.533870   22106 main.go:141] libmachine: (ha-863936)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:36:33.533884   22106 main.go:141] libmachine: (ha-863936)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/ha-863936.rawdisk'/>
	I0816 12:36:33.533894   22106 main.go:141] libmachine: (ha-863936)       <target dev='hda' bus='virtio'/>
	I0816 12:36:33.533906   22106 main.go:141] libmachine: (ha-863936)     </disk>
	I0816 12:36:33.533912   22106 main.go:141] libmachine: (ha-863936)     <interface type='network'>
	I0816 12:36:33.533926   22106 main.go:141] libmachine: (ha-863936)       <source network='mk-ha-863936'/>
	I0816 12:36:33.533945   22106 main.go:141] libmachine: (ha-863936)       <model type='virtio'/>
	I0816 12:36:33.533962   22106 main.go:141] libmachine: (ha-863936)     </interface>
	I0816 12:36:33.533974   22106 main.go:141] libmachine: (ha-863936)     <interface type='network'>
	I0816 12:36:33.533984   22106 main.go:141] libmachine: (ha-863936)       <source network='default'/>
	I0816 12:36:33.533995   22106 main.go:141] libmachine: (ha-863936)       <model type='virtio'/>
	I0816 12:36:33.534010   22106 main.go:141] libmachine: (ha-863936)     </interface>
	I0816 12:36:33.534018   22106 main.go:141] libmachine: (ha-863936)     <serial type='pty'>
	I0816 12:36:33.534029   22106 main.go:141] libmachine: (ha-863936)       <target port='0'/>
	I0816 12:36:33.534041   22106 main.go:141] libmachine: (ha-863936)     </serial>
	I0816 12:36:33.534049   22106 main.go:141] libmachine: (ha-863936)     <console type='pty'>
	I0816 12:36:33.534062   22106 main.go:141] libmachine: (ha-863936)       <target type='serial' port='0'/>
	I0816 12:36:33.534071   22106 main.go:141] libmachine: (ha-863936)     </console>
	I0816 12:36:33.534087   22106 main.go:141] libmachine: (ha-863936)     <rng model='virtio'>
	I0816 12:36:33.534102   22106 main.go:141] libmachine: (ha-863936)       <backend model='random'>/dev/random</backend>
	I0816 12:36:33.534111   22106 main.go:141] libmachine: (ha-863936)     </rng>
	I0816 12:36:33.534116   22106 main.go:141] libmachine: (ha-863936)     
	I0816 12:36:33.534124   22106 main.go:141] libmachine: (ha-863936)     
	I0816 12:36:33.534132   22106 main.go:141] libmachine: (ha-863936)   </devices>
	I0816 12:36:33.534144   22106 main.go:141] libmachine: (ha-863936) </domain>
	I0816 12:36:33.534154   22106 main.go:141] libmachine: (ha-863936) 
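
The lines above are the complete libvirt domain XML that the kvm2 driver assembles before booting the guest. As a rough, illustrative sketch only (this is not minikube's actual driver code; the connection URI matches the KVMQemuURI from the profile and the XML string is a placeholder), defining and starting such a domain with the Go libvirt bindings looks roughly like this:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // createDomain persistently defines a guest from a domain XML string and
    // boots it, mirroring the "define libvirt domain using xml" and
    // "Creating domain..." steps in the log above.
    func createDomain(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI in the profile
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // boot the guest; the driver then waits for a DHCP lease
    }

    func main() {
        if err := createDomain("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }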
	I0816 12:36:33.538625   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:3f:7d:80 in network default
	I0816 12:36:33.539104   22106 main.go:141] libmachine: (ha-863936) Ensuring networks are active...
	I0816 12:36:33.539122   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:33.539711   22106 main.go:141] libmachine: (ha-863936) Ensuring network default is active
	I0816 12:36:33.539944   22106 main.go:141] libmachine: (ha-863936) Ensuring network mk-ha-863936 is active
	I0816 12:36:33.540382   22106 main.go:141] libmachine: (ha-863936) Getting domain xml...
	I0816 12:36:33.541054   22106 main.go:141] libmachine: (ha-863936) Creating domain...
	I0816 12:36:34.707299   22106 main.go:141] libmachine: (ha-863936) Waiting to get IP...
	I0816 12:36:34.708214   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:34.708557   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:34.708585   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:34.708533   22145 retry.go:31] will retry after 235.79842ms: waiting for machine to come up
	I0816 12:36:34.946052   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:34.946490   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:34.946510   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:34.946459   22145 retry.go:31] will retry after 286.730589ms: waiting for machine to come up
	I0816 12:36:35.234829   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:35.235292   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:35.235319   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:35.235249   22145 retry.go:31] will retry after 372.002112ms: waiting for machine to come up
	I0816 12:36:35.608963   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:35.609506   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:35.609529   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:35.609480   22145 retry.go:31] will retry after 435.098284ms: waiting for machine to come up
	I0816 12:36:36.045944   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:36.046322   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:36.046350   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:36.046274   22145 retry.go:31] will retry after 725.404095ms: waiting for machine to come up
	I0816 12:36:36.773280   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:36.773700   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:36.773729   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:36.773653   22145 retry.go:31] will retry after 744.247182ms: waiting for machine to come up
	I0816 12:36:37.519622   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:37.520086   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:37.520137   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:37.520001   22145 retry.go:31] will retry after 804.927636ms: waiting for machine to come up
	I0816 12:36:38.326481   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:38.326877   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:38.326902   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:38.326829   22145 retry.go:31] will retry after 941.718732ms: waiting for machine to come up
	I0816 12:36:39.269832   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:39.270287   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:39.270329   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:39.270252   22145 retry.go:31] will retry after 1.138744713s: waiting for machine to come up
	I0816 12:36:40.410235   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:40.410623   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:40.410644   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:40.410585   22145 retry.go:31] will retry after 1.56134778s: waiting for machine to come up
	I0816 12:36:41.974169   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:41.974598   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:41.974629   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:41.974543   22145 retry.go:31] will retry after 2.667992359s: waiting for machine to come up
	I0816 12:36:44.645158   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:44.645587   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:44.645635   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:44.645578   22145 retry.go:31] will retry after 2.979452041s: waiting for machine to come up
	I0816 12:36:47.628572   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:47.629020   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:47.629047   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:47.628972   22145 retry.go:31] will retry after 2.839313737s: waiting for machine to come up
	I0816 12:36:50.471956   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:50.472551   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find current IP address of domain ha-863936 in network mk-ha-863936
	I0816 12:36:50.472580   22106 main.go:141] libmachine: (ha-863936) DBG | I0816 12:36:50.472504   22145 retry.go:31] will retry after 4.05549474s: waiting for machine to come up
	I0816 12:36:54.529582   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:54.529882   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has current primary IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:54.529901   22106 main.go:141] libmachine: (ha-863936) Found IP for machine: 192.168.39.2
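
The run of "will retry after ...: waiting for machine to come up" messages above is a simple poll of the libvirt network's DHCP leases for the guest's MAC address, with a progressively longer sleep between attempts. A minimal, self-contained sketch of that pattern follows; the intervals and the stubbed lease lookup are illustrative, not minikube's real retry helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying the libvirt network's DHCP leases for the
    // guest's MAC address; here it always fails so the retry path is exercised.
    func lookupIP(mac string) (string, error) {
        return "", errNoLease
    }

    // waitForIP polls lookupIP with a growing, jittered delay until it succeeds
    // or the overall timeout expires.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:88:fe:d4", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }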
	I0816 12:36:54.529912   22106 main.go:141] libmachine: (ha-863936) Reserving static IP address...
	I0816 12:36:54.530219   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find host DHCP lease matching {name: "ha-863936", mac: "52:54:00:88:fe:d4", ip: "192.168.39.2"} in network mk-ha-863936
	I0816 12:36:54.599629   22106 main.go:141] libmachine: (ha-863936) DBG | Getting to WaitForSSH function...
	I0816 12:36:54.599659   22106 main.go:141] libmachine: (ha-863936) Reserved static IP address: 192.168.39.2
	I0816 12:36:54.599672   22106 main.go:141] libmachine: (ha-863936) Waiting for SSH to be available...
	I0816 12:36:54.602035   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:54.602380   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936
	I0816 12:36:54.602405   22106 main.go:141] libmachine: (ha-863936) DBG | unable to find defined IP address of network mk-ha-863936 interface with MAC address 52:54:00:88:fe:d4
	I0816 12:36:54.602542   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH client type: external
	I0816 12:36:54.602568   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa (-rw-------)
	I0816 12:36:54.602613   22106 main.go:141] libmachine: (ha-863936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:36:54.602645   22106 main.go:141] libmachine: (ha-863936) DBG | About to run SSH command:
	I0816 12:36:54.602762   22106 main.go:141] libmachine: (ha-863936) DBG | exit 0
	I0816 12:36:54.606160   22106 main.go:141] libmachine: (ha-863936) DBG | SSH cmd err, output: exit status 255: 
	I0816 12:36:54.606178   22106 main.go:141] libmachine: (ha-863936) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 12:36:54.606185   22106 main.go:141] libmachine: (ha-863936) DBG | command : exit 0
	I0816 12:36:54.606192   22106 main.go:141] libmachine: (ha-863936) DBG | err     : exit status 255
	I0816 12:36:54.606199   22106 main.go:141] libmachine: (ha-863936) DBG | output  : 
	I0816 12:36:57.608362   22106 main.go:141] libmachine: (ha-863936) DBG | Getting to WaitForSSH function...
	I0816 12:36:57.611132   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.611494   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.611523   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.611608   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH client type: external
	I0816 12:36:57.611642   22106 main.go:141] libmachine: (ha-863936) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa (-rw-------)
	I0816 12:36:57.611672   22106 main.go:141] libmachine: (ha-863936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:36:57.611686   22106 main.go:141] libmachine: (ha-863936) DBG | About to run SSH command:
	I0816 12:36:57.611697   22106 main.go:141] libmachine: (ha-863936) DBG | exit 0
	I0816 12:36:57.733040   22106 main.go:141] libmachine: (ha-863936) DBG | SSH cmd err, output: <nil>: 
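
WaitForSSH above shells out to the system ssh client and runs `exit 0` until it returns status 0; the first attempt fails with status 255 because it is issued before the DHCP lease exposes an IP. A rough sketch of that readiness probe with os/exec, reusing the docker user and key path shown in the log (the retry count and sleep here are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest via the external ssh client; a nil
    // error (exit status 0) means sshd is up and the key is accepted.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        ip := "192.168.39.2"
        key := "/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa"
        for i := 0; i < 5; i++ {
            if sshReady(ip, key) {
                fmt.Println("SSH available")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }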
	I0816 12:36:57.733299   22106 main.go:141] libmachine: (ha-863936) KVM machine creation complete!
	I0816 12:36:57.733639   22106 main.go:141] libmachine: (ha-863936) Calling .GetConfigRaw
	I0816 12:36:57.734186   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:57.734331   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:57.734501   22106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:36:57.734515   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:36:57.735605   22106 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:36:57.735617   22106 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:36:57.735622   22106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:36:57.735628   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:57.737594   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.737913   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.737937   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.738062   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:57.738225   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.738384   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.738529   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:57.738675   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:57.738912   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:57.738928   22106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:36:57.836202   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:36:57.836231   22106 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:36:57.836240   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:57.838974   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.839315   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.839347   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.839552   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:57.839749   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.839916   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.840055   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:57.840205   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:57.840396   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:57.840409   22106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:36:57.937627   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:36:57.937686   22106 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:36:57.937693   22106 main.go:141] libmachine: Provisioning with buildroot...
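
The provisioner detection above runs `cat /etc/os-release` and matches the ID field against the provisioners libmachine knows about; ID=buildroot selects the buildroot provisioner. A tiny sketch of that kind of parsing (illustrative only, not the actual libmachine code):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectDistro extracts the ID= field from captured /etc/os-release output.
    func detectDistro(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return "unknown"
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fmt.Println(detectDistro(out)) // prints "buildroot"
    }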
	I0816 12:36:57.937700   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:57.937945   22106 buildroot.go:166] provisioning hostname "ha-863936"
	I0816 12:36:57.937971   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:57.938121   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:57.940492   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.940894   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:57.940929   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:57.941085   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:57.941286   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.941472   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:57.941596   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:57.941743   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:57.941969   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:57.941984   22106 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936 && echo "ha-863936" | sudo tee /etc/hostname
	I0816 12:36:58.051455   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936
	
	I0816 12:36:58.051484   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.054131   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.054428   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.054455   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.054631   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.054839   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.055014   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.055187   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.055335   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:58.055527   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:58.055548   22106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:36:58.162086   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:36:58.162115   22106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:36:58.162165   22106 buildroot.go:174] setting up certificates
	I0816 12:36:58.162183   22106 provision.go:84] configureAuth start
	I0816 12:36:58.162191   22106 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:36:58.162442   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:36:58.165016   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.165350   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.165373   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.165526   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.167671   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.168011   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.168037   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.168147   22106 provision.go:143] copyHostCerts
	I0816 12:36:58.168177   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:36:58.168216   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:36:58.168236   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:36:58.168314   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:36:58.168420   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:36:58.168445   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:36:58.168451   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:36:58.168502   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:36:58.168577   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:36:58.168615   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:36:58.168624   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:36:58.168661   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:36:58.168762   22106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936 san=[127.0.0.1 192.168.39.2 ha-863936 localhost minikube]
	I0816 12:36:58.274002   22106 provision.go:177] copyRemoteCerts
	I0816 12:36:58.274071   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:36:58.274102   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.276663   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.276965   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.276994   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.277196   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.277361   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.277516   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.277664   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:58.355502   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:36:58.355592   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:36:58.383229   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:36:58.383294   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 12:36:58.410432   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:36:58.410508   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 12:36:58.437316   22106 provision.go:87] duration metric: took 275.123314ms to configureAuth
	I0816 12:36:58.437338   22106 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:36:58.437527   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:36:58.437605   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.439981   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.440293   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.440318   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.440490   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.440673   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.440832   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.440996   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.441159   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:58.441317   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:58.441330   22106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:36:58.710508   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:36:58.710534   22106 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:36:58.710543   22106 main.go:141] libmachine: (ha-863936) Calling .GetURL
	I0816 12:36:58.711676   22106 main.go:141] libmachine: (ha-863936) DBG | Using libvirt version 6000000
	I0816 12:36:58.713804   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.714036   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.714070   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.714187   22106 main.go:141] libmachine: Docker is up and running!
	I0816 12:36:58.714202   22106 main.go:141] libmachine: Reticulating splines...
	I0816 12:36:58.714210   22106 client.go:171] duration metric: took 25.602002765s to LocalClient.Create
	I0816 12:36:58.714235   22106 start.go:167] duration metric: took 25.602064165s to libmachine.API.Create "ha-863936"
	I0816 12:36:58.714256   22106 start.go:293] postStartSetup for "ha-863936" (driver="kvm2")
	I0816 12:36:58.714279   22106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:36:58.714298   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.714526   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:36:58.714548   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.716428   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.716673   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.716699   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.716805   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.716975   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.717145   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.717303   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:58.795033   22106 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:36:58.799670   22106 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:36:58.799688   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:36:58.799754   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:36:58.799847   22106 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:36:58.799857   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:36:58.799980   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:36:58.809592   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:36:58.837153   22106 start.go:296] duration metric: took 122.874442ms for postStartSetup
	I0816 12:36:58.837200   22106 main.go:141] libmachine: (ha-863936) Calling .GetConfigRaw
	I0816 12:36:58.837738   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:36:58.840054   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.840360   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.840382   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.840590   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:36:58.840792   22106 start.go:128] duration metric: took 25.745604524s to createHost
	I0816 12:36:58.840815   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.842610   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.842896   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.842925   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.843043   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.843206   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.843336   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.843494   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.843671   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:36:58.843871   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:36:58.843883   22106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:36:58.941633   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811818.921188895
	
	I0816 12:36:58.941655   22106 fix.go:216] guest clock: 1723811818.921188895
	I0816 12:36:58.941663   22106 fix.go:229] Guest: 2024-08-16 12:36:58.921188895 +0000 UTC Remote: 2024-08-16 12:36:58.84080489 +0000 UTC m=+25.845157784 (delta=80.384005ms)
	I0816 12:36:58.941701   22106 fix.go:200] guest clock delta is within tolerance: 80.384005ms
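
For reference, the reported delta is simply guest time minus host time at the moment of the `date +%s.%N` check: 1723811818.921188895 s − 1723811818.840804890 s = 0.080384005 s ≈ 80.384 ms, which is why fix.go reports the guest clock as within tolerance.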
	I0816 12:36:58.941708   22106 start.go:83] releasing machines lock for "ha-863936", held for 25.846598719s
	I0816 12:36:58.941732   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.941956   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:36:58.944195   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.944538   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.944578   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.944679   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.945211   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.945356   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:36:58.945429   22106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:36:58.945477   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.945629   22106 ssh_runner.go:195] Run: cat /version.json
	I0816 12:36:58.945652   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:36:58.947899   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948211   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.948234   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948252   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948347   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.948536   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.948693   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:36:58.948713   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:36:58.948752   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.948862   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:36:58.948993   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:58.949063   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:36:58.949201   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:36:58.949332   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:36:59.022013   22106 ssh_runner.go:195] Run: systemctl --version
	I0816 12:36:59.046055   22106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:36:59.199918   22106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:36:59.205719   22106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:36:59.205792   22106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:36:59.222101   22106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:36:59.222124   22106 start.go:495] detecting cgroup driver to use...
	I0816 12:36:59.222183   22106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:36:59.238191   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:36:59.251719   22106 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:36:59.251769   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:36:59.265166   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:36:59.278597   22106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:36:59.393979   22106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:36:59.544406   22106 docker.go:233] disabling docker service ...
	I0816 12:36:59.544464   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:36:59.558840   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:36:59.571562   22106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:36:59.694834   22106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:36:59.813595   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:36:59.827354   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:36:59.845758   22106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:36:59.845811   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.856402   22106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:36:59.856447   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.866890   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.877035   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.887490   22106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:36:59.897770   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.907908   22106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.924420   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:36:59.934587   22106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:36:59.943661   22106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:36:59.943727   22106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:36:59.956613   22106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
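
The failed sysctl a few lines above is expected on a freshly booted guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, so the code falls back to `modprobe br_netfilter` and then enables IPv4 forwarding before reloading systemd and restarting CRI-O.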
	I0816 12:36:59.965940   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:37:00.085504   22106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:37:00.221358   22106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:37:00.221431   22106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:37:00.226179   22106 start.go:563] Will wait 60s for crictl version
	I0816 12:37:00.226239   22106 ssh_runner.go:195] Run: which crictl
	I0816 12:37:00.229795   22106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:37:00.268160   22106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:37:00.268251   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:00.294793   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:00.324459   22106 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:37:00.325811   22106 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:37:00.328293   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:00.328641   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:00.328667   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:00.328847   22106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:37:00.332764   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:37:00.345931   22106 kubeadm.go:883] updating cluster {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:37:00.346063   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:37:00.346111   22106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:37:00.377715   22106 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 12:37:00.377789   22106 ssh_runner.go:195] Run: which lz4
	I0816 12:37:00.381595   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 12:37:00.381678   22106 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 12:37:00.385779   22106 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 12:37:00.385813   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 12:37:01.718458   22106 crio.go:462] duration metric: took 1.336808857s to copy over tarball
	I0816 12:37:01.718543   22106 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 12:37:03.731657   22106 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.013082282s)
	I0816 12:37:03.731688   22106 crio.go:469] duration metric: took 2.013202273s to extract the tarball
	I0816 12:37:03.731696   22106 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 12:37:03.768560   22106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:37:03.814909   22106 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:37:03.814938   22106 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:37:03.814945   22106 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.31.0 crio true true} ...
	I0816 12:37:03.815033   22106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
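
In the kubelet unit drop-in above, the empty `ExecStart=` line is the standard systemd idiom for clearing any previously defined start command before the second `ExecStart=` sets the kubelet invocation with the node-specific flags (--hostname-override, --node-ip, and the CRI-O bootstrap kubeconfig).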
	I0816 12:37:03.815109   22106 ssh_runner.go:195] Run: crio config
	I0816 12:37:03.864151   22106 cni.go:84] Creating CNI manager for ""
	I0816 12:37:03.864171   22106 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 12:37:03.864180   22106 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:37:03.864199   22106 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863936 NodeName:ha-863936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:37:03.864315   22106 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:37:03.864339   22106 kube-vip.go:115] generating kube-vip config ...
	I0816 12:37:03.864381   22106 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:37:03.881475   22106 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:37:03.881674   22106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0816 12:37:03.881752   22106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:37:03.891974   22106 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:37:03.892045   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 12:37:03.904064   22106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 12:37:03.920714   22106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:37:03.937785   22106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0816 12:37:03.954082   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0816 12:37:03.969749   22106 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:37:03.973444   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:37:03.985039   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:37:04.117836   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:37:04.135197   22106 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.2
	I0816 12:37:04.135219   22106 certs.go:194] generating shared ca certs ...
	I0816 12:37:04.135238   22106 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.135409   22106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:37:04.135464   22106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:37:04.135479   22106 certs.go:256] generating profile certs ...
	I0816 12:37:04.135540   22106 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:37:04.135557   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt with IP's: []
	I0816 12:37:04.286829   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt ...
	I0816 12:37:04.286855   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt: {Name:mk3c8e19727ad782fc37b7c10c318864d8bf662a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.287013   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key ...
	I0816 12:37:04.287023   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key: {Name:mk20a68f4171979de7052db8f1e89f5baaff55a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.287123   22106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89
	I0816 12:37:04.287140   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.254]
	I0816 12:37:04.419270   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89 ...
	I0816 12:37:04.419298   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89: {Name:mkfebc5717092261a16c434a47e224f6ebd88df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.419437   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89 ...
	I0816 12:37:04.419449   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89: {Name:mk235afa59962aa082ba1b26e96b63080d574abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.419518   22106 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.3e1ece89 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:37:04.419598   22106 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.3e1ece89 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:37:04.419652   22106 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:37:04.419666   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt with IP's: []
	I0816 12:37:04.753212   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt ...
	I0816 12:37:04.753239   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt: {Name:mk61c146dbc6bf8fbcfd831eae718e0e1aa7bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.753382   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key ...
	I0816 12:37:04.753393   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key: {Name:mk7152f64e6ce778dd27d833594971ad2030a4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:04.753454   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:37:04.753470   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:37:04.753481   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:37:04.753494   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:37:04.753507   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:37:04.753519   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:37:04.753531   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:37:04.753543   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:37:04.753590   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:37:04.753624   22106 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:37:04.753632   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:37:04.753653   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:37:04.753676   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:37:04.753698   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:37:04.753734   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:37:04.753758   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:04.753772   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:37:04.753807   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:37:04.754318   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:37:04.779865   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:37:04.803727   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:37:04.827684   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:37:04.851974   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 12:37:04.875136   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 12:37:04.901383   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:37:04.942284   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:37:04.969463   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:37:04.992491   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:37:05.015994   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:37:05.040511   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:37:05.056803   22106 ssh_runner.go:195] Run: openssl version
	I0816 12:37:05.062412   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:37:05.073019   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:05.077441   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:05.077485   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:05.083071   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:37:05.093512   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:37:05.103596   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:37:05.107740   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:37:05.107780   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:37:05.113302   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:37:05.123486   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:37:05.133632   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:37:05.137892   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:37:05.137932   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:37:05.143541   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:37:05.153932   22106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:37:05.157875   22106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:37:05.157925   22106 kubeadm.go:392] StartCluster: {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:37:05.157993   22106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:37:05.158032   22106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:37:05.198628   22106 cri.go:89] found id: ""
	I0816 12:37:05.198687   22106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 12:37:05.208057   22106 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 12:37:05.221611   22106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 12:37:05.233147   22106 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 12:37:05.233165   22106 kubeadm.go:157] found existing configuration files:
	
	I0816 12:37:05.233223   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 12:37:05.241915   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 12:37:05.241973   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 12:37:05.250984   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 12:37:05.259559   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 12:37:05.259609   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 12:37:05.268632   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 12:37:05.277082   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 12:37:05.277124   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 12:37:05.286168   22106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 12:37:05.294641   22106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 12:37:05.294686   22106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 12:37:05.303471   22106 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 12:37:05.406815   22106 kubeadm.go:310] W0816 12:37:05.392317     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:37:05.409908   22106 kubeadm.go:310] W0816 12:37:05.395512     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:37:05.518595   22106 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 12:37:16.343194   22106 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 12:37:16.343273   22106 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 12:37:16.343362   22106 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 12:37:16.343494   22106 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 12:37:16.343613   22106 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 12:37:16.343705   22106 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 12:37:16.345471   22106 out.go:235]   - Generating certificates and keys ...
	I0816 12:37:16.345570   22106 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 12:37:16.345653   22106 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 12:37:16.345741   22106 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 12:37:16.345810   22106 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 12:37:16.345878   22106 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 12:37:16.345958   22106 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 12:37:16.346013   22106 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 12:37:16.346134   22106 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-863936 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0816 12:37:16.346203   22106 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 12:37:16.346310   22106 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-863936 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0816 12:37:16.346365   22106 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 12:37:16.346433   22106 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 12:37:16.346501   22106 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 12:37:16.346565   22106 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 12:37:16.346636   22106 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 12:37:16.346714   22106 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 12:37:16.346783   22106 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 12:37:16.346873   22106 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 12:37:16.346953   22106 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 12:37:16.347033   22106 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 12:37:16.347128   22106 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 12:37:16.348632   22106 out.go:235]   - Booting up control plane ...
	I0816 12:37:16.348728   22106 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 12:37:16.348816   22106 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 12:37:16.348878   22106 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 12:37:16.349027   22106 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 12:37:16.349155   22106 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 12:37:16.349225   22106 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 12:37:16.349372   22106 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 12:37:16.349501   22106 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 12:37:16.349559   22106 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.000363ms
	I0816 12:37:16.349659   22106 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 12:37:16.349739   22106 kubeadm.go:310] [api-check] The API server is healthy after 6.014136208s
	I0816 12:37:16.349845   22106 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 12:37:16.349953   22106 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 12:37:16.350002   22106 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 12:37:16.350159   22106 kubeadm.go:310] [mark-control-plane] Marking the node ha-863936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 12:37:16.350218   22106 kubeadm.go:310] [bootstrap-token] Using token: lvudru.afb7dzk6lhr7lh2y
	I0816 12:37:16.351850   22106 out.go:235]   - Configuring RBAC rules ...
	I0816 12:37:16.351979   22106 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 12:37:16.352082   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 12:37:16.352227   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 12:37:16.352376   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 12:37:16.352482   22106 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 12:37:16.352588   22106 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 12:37:16.352706   22106 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 12:37:16.352744   22106 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 12:37:16.352783   22106 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 12:37:16.352789   22106 kubeadm.go:310] 
	I0816 12:37:16.352868   22106 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 12:37:16.352879   22106 kubeadm.go:310] 
	I0816 12:37:16.353010   22106 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 12:37:16.353022   22106 kubeadm.go:310] 
	I0816 12:37:16.353053   22106 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 12:37:16.353129   22106 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 12:37:16.353197   22106 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 12:37:16.353207   22106 kubeadm.go:310] 
	I0816 12:37:16.353299   22106 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 12:37:16.353311   22106 kubeadm.go:310] 
	I0816 12:37:16.353375   22106 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 12:37:16.353384   22106 kubeadm.go:310] 
	I0816 12:37:16.353471   22106 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 12:37:16.353683   22106 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 12:37:16.353779   22106 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 12:37:16.353789   22106 kubeadm.go:310] 
	I0816 12:37:16.353891   22106 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 12:37:16.353999   22106 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 12:37:16.354008   22106 kubeadm.go:310] 
	I0816 12:37:16.354144   22106 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lvudru.afb7dzk6lhr7lh2y \
	I0816 12:37:16.354282   22106 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 12:37:16.354313   22106 kubeadm.go:310] 	--control-plane 
	I0816 12:37:16.354320   22106 kubeadm.go:310] 
	I0816 12:37:16.354404   22106 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 12:37:16.354424   22106 kubeadm.go:310] 
	I0816 12:37:16.354538   22106 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lvudru.afb7dzk6lhr7lh2y \
	I0816 12:37:16.354652   22106 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 12:37:16.354695   22106 cni.go:84] Creating CNI manager for ""
	I0816 12:37:16.354704   22106 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 12:37:16.356387   22106 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 12:37:16.357774   22106 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 12:37:16.363355   22106 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 12:37:16.363371   22106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 12:37:16.384862   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 12:37:16.748270   22106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 12:37:16.748343   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:16.748368   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863936 minikube.k8s.io/updated_at=2024_08_16T12_37_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=ha-863936 minikube.k8s.io/primary=true
	I0816 12:37:16.781305   22106 ops.go:34] apiserver oom_adj: -16
	I0816 12:37:16.884210   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:17.384293   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:17.884514   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:18.385238   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:18.884557   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:19.385272   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:19.884233   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:37:19.982889   22106 kubeadm.go:1113] duration metric: took 3.234603021s to wait for elevateKubeSystemPrivileges
	I0816 12:37:19.982926   22106 kubeadm.go:394] duration metric: took 14.825002272s to StartCluster
	I0816 12:37:19.982948   22106 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:19.983025   22106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:37:19.983705   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:19.983899   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 12:37:19.983915   22106 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 12:37:19.983955   22106 addons.go:69] Setting storage-provisioner=true in profile "ha-863936"
	I0816 12:37:19.983988   22106 addons.go:234] Setting addon storage-provisioner=true in "ha-863936"
	I0816 12:37:19.983901   22106 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:37:19.984019   22106 addons.go:69] Setting default-storageclass=true in profile "ha-863936"
	I0816 12:37:19.984028   22106 start.go:241] waiting for startup goroutines ...
	I0816 12:37:19.984024   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:37:19.984085   22106 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-863936"
	I0816 12:37:19.984163   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:19.984423   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:19.984451   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:19.984485   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:19.984517   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:19.999421   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0816 12:37:19.999861   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:19.999953   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0816 12:37:20.000281   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.000461   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.000487   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.000742   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.000767   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.000856   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.001041   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:20.001088   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.001572   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.001598   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.003400   22106 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:37:20.003717   22106 kapi.go:59] client config for ha-863936: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key", CAFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 12:37:20.004233   22106 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 12:37:20.004587   22106 addons.go:234] Setting addon default-storageclass=true in "ha-863936"
	I0816 12:37:20.004631   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:37:20.005023   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.005053   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.016523   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0816 12:37:20.016987   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.017535   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.017553   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.017859   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.018056   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:20.019745   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:37:20.019891   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0816 12:37:20.020227   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.020608   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.020625   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.020925   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.021540   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.021606   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.022159   22106 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 12:37:20.023672   22106 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:37:20.023693   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 12:37:20.023713   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:37:20.026914   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.027315   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:20.027335   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.027483   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:37:20.027626   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:37:20.027725   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:37:20.027820   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:37:20.038121   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0816 12:37:20.038481   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.038973   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.038994   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.039283   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.039456   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:20.040897   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:37:20.041134   22106 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 12:37:20.041146   22106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 12:37:20.041160   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:37:20.043868   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.045012   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:37:20.045018   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:20.045046   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:20.045217   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:37:20.045376   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:37:20.045500   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:37:20.094170   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 12:37:20.165956   22106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:37:20.190074   22106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 12:37:20.595501   22106 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0816 12:37:20.943920   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.943938   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.943948   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.943955   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.944243   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944252   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944265   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944269   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944276   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.944280   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.944285   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.944288   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.944481   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944484   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.944494   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944505   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.944566   22106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 12:37:20.944588   22106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 12:37:20.944672   22106 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 12:37:20.944681   22106 round_trippers.go:469] Request Headers:
	I0816 12:37:20.944700   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:37:20.944705   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:37:20.955313   22106 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0816 12:37:20.955988   22106 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 12:37:20.956003   22106 round_trippers.go:469] Request Headers:
	I0816 12:37:20.956010   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:37:20.956013   22106 round_trippers.go:473]     Content-Type: application/json
	I0816 12:37:20.956016   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:37:20.958516   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:37:20.958647   22106 main.go:141] libmachine: Making call to close driver server
	I0816 12:37:20.958661   22106 main.go:141] libmachine: (ha-863936) Calling .Close
	I0816 12:37:20.958945   22106 main.go:141] libmachine: (ha-863936) DBG | Closing plugin on server side
	I0816 12:37:20.958989   22106 main.go:141] libmachine: Successfully made call to close driver server
	I0816 12:37:20.958998   22106 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 12:37:20.961733   22106 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 12:37:20.962970   22106 addons.go:510] duration metric: took 979.05008ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0816 12:37:20.963004   22106 start.go:246] waiting for cluster config update ...
	I0816 12:37:20.963016   22106 start.go:255] writing updated cluster config ...
	I0816 12:37:20.964754   22106 out.go:201] 
	I0816 12:37:20.966457   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:20.966523   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:37:20.968354   22106 out.go:177] * Starting "ha-863936-m02" control-plane node in "ha-863936" cluster
	I0816 12:37:20.969799   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:37:20.969820   22106 cache.go:56] Caching tarball of preloaded images
	I0816 12:37:20.969901   22106 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:37:20.969912   22106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:37:20.969980   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:37:20.970128   22106 start.go:360] acquireMachinesLock for ha-863936-m02: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:37:20.970164   22106 start.go:364] duration metric: took 19.96µs to acquireMachinesLock for "ha-863936-m02"
	I0816 12:37:20.970178   22106 start.go:93] Provisioning new machine with config: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:37:20.970252   22106 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0816 12:37:20.971726   22106 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 12:37:20.971799   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:20.971825   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:20.986311   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0816 12:37:20.986832   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:20.987310   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:20.987330   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:20.987658   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:20.987875   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:20.988025   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:20.988221   22106 start.go:159] libmachine.API.Create for "ha-863936" (driver="kvm2")
	I0816 12:37:20.988243   22106 client.go:168] LocalClient.Create starting
	I0816 12:37:20.988275   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:37:20.988311   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:37:20.988332   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:37:20.988400   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:37:20.988430   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:37:20.988452   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:37:20.988479   22106 main.go:141] libmachine: Running pre-create checks...
	I0816 12:37:20.988491   22106 main.go:141] libmachine: (ha-863936-m02) Calling .PreCreateCheck
	I0816 12:37:20.988642   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetConfigRaw
	I0816 12:37:20.989056   22106 main.go:141] libmachine: Creating machine...
	I0816 12:37:20.989070   22106 main.go:141] libmachine: (ha-863936-m02) Calling .Create
	I0816 12:37:20.989213   22106 main.go:141] libmachine: (ha-863936-m02) Creating KVM machine...
	I0816 12:37:20.990534   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found existing default KVM network
	I0816 12:37:20.990706   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found existing private KVM network mk-ha-863936
	I0816 12:37:20.990851   22106 main.go:141] libmachine: (ha-863936-m02) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02 ...
	I0816 12:37:20.990875   22106 main.go:141] libmachine: (ha-863936-m02) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:37:20.990963   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:20.990843   22488 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:37:20.991031   22106 main.go:141] libmachine: (ha-863936-m02) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:37:21.234968   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:21.234855   22488 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa...
	I0816 12:37:21.638861   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:21.638689   22488 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/ha-863936-m02.rawdisk...
	I0816 12:37:21.638898   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Writing magic tar header
	I0816 12:37:21.638915   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Writing SSH key tar header
	I0816 12:37:21.638932   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:21.638831   22488 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02 ...
	I0816 12:37:21.638949   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02
	I0816 12:37:21.639051   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:37:21.639080   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02 (perms=drwx------)
	I0816 12:37:21.639091   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:37:21.639107   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:37:21.639120   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:37:21.639135   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:37:21.639163   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Checking permissions on dir: /home
	I0816 12:37:21.639181   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Skipping /home - not owner
	I0816 12:37:21.639199   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:37:21.639211   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:37:21.639222   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:37:21.639236   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:37:21.639248   22106 main.go:141] libmachine: (ha-863936-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:37:21.639263   22106 main.go:141] libmachine: (ha-863936-m02) Creating domain...
	I0816 12:37:21.640077   22106 main.go:141] libmachine: (ha-863936-m02) define libvirt domain using xml: 
	I0816 12:37:21.640101   22106 main.go:141] libmachine: (ha-863936-m02) <domain type='kvm'>
	I0816 12:37:21.640112   22106 main.go:141] libmachine: (ha-863936-m02)   <name>ha-863936-m02</name>
	I0816 12:37:21.640127   22106 main.go:141] libmachine: (ha-863936-m02)   <memory unit='MiB'>2200</memory>
	I0816 12:37:21.640140   22106 main.go:141] libmachine: (ha-863936-m02)   <vcpu>2</vcpu>
	I0816 12:37:21.640147   22106 main.go:141] libmachine: (ha-863936-m02)   <features>
	I0816 12:37:21.640160   22106 main.go:141] libmachine: (ha-863936-m02)     <acpi/>
	I0816 12:37:21.640168   22106 main.go:141] libmachine: (ha-863936-m02)     <apic/>
	I0816 12:37:21.640175   22106 main.go:141] libmachine: (ha-863936-m02)     <pae/>
	I0816 12:37:21.640182   22106 main.go:141] libmachine: (ha-863936-m02)     
	I0816 12:37:21.640189   22106 main.go:141] libmachine: (ha-863936-m02)   </features>
	I0816 12:37:21.640197   22106 main.go:141] libmachine: (ha-863936-m02)   <cpu mode='host-passthrough'>
	I0816 12:37:21.640206   22106 main.go:141] libmachine: (ha-863936-m02)   
	I0816 12:37:21.640267   22106 main.go:141] libmachine: (ha-863936-m02)   </cpu>
	I0816 12:37:21.640310   22106 main.go:141] libmachine: (ha-863936-m02)   <os>
	I0816 12:37:21.640325   22106 main.go:141] libmachine: (ha-863936-m02)     <type>hvm</type>
	I0816 12:37:21.640337   22106 main.go:141] libmachine: (ha-863936-m02)     <boot dev='cdrom'/>
	I0816 12:37:21.640351   22106 main.go:141] libmachine: (ha-863936-m02)     <boot dev='hd'/>
	I0816 12:37:21.640358   22106 main.go:141] libmachine: (ha-863936-m02)     <bootmenu enable='no'/>
	I0816 12:37:21.640369   22106 main.go:141] libmachine: (ha-863936-m02)   </os>
	I0816 12:37:21.640378   22106 main.go:141] libmachine: (ha-863936-m02)   <devices>
	I0816 12:37:21.640390   22106 main.go:141] libmachine: (ha-863936-m02)     <disk type='file' device='cdrom'>
	I0816 12:37:21.640407   22106 main.go:141] libmachine: (ha-863936-m02)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/boot2docker.iso'/>
	I0816 12:37:21.640438   22106 main.go:141] libmachine: (ha-863936-m02)       <target dev='hdc' bus='scsi'/>
	I0816 12:37:21.640460   22106 main.go:141] libmachine: (ha-863936-m02)       <readonly/>
	I0816 12:37:21.640473   22106 main.go:141] libmachine: (ha-863936-m02)     </disk>
	I0816 12:37:21.640483   22106 main.go:141] libmachine: (ha-863936-m02)     <disk type='file' device='disk'>
	I0816 12:37:21.640508   22106 main.go:141] libmachine: (ha-863936-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:37:21.640523   22106 main.go:141] libmachine: (ha-863936-m02)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/ha-863936-m02.rawdisk'/>
	I0816 12:37:21.640536   22106 main.go:141] libmachine: (ha-863936-m02)       <target dev='hda' bus='virtio'/>
	I0816 12:37:21.640555   22106 main.go:141] libmachine: (ha-863936-m02)     </disk>
	I0816 12:37:21.640576   22106 main.go:141] libmachine: (ha-863936-m02)     <interface type='network'>
	I0816 12:37:21.640590   22106 main.go:141] libmachine: (ha-863936-m02)       <source network='mk-ha-863936'/>
	I0816 12:37:21.640603   22106 main.go:141] libmachine: (ha-863936-m02)       <model type='virtio'/>
	I0816 12:37:21.640614   22106 main.go:141] libmachine: (ha-863936-m02)     </interface>
	I0816 12:37:21.640624   22106 main.go:141] libmachine: (ha-863936-m02)     <interface type='network'>
	I0816 12:37:21.640634   22106 main.go:141] libmachine: (ha-863936-m02)       <source network='default'/>
	I0816 12:37:21.640646   22106 main.go:141] libmachine: (ha-863936-m02)       <model type='virtio'/>
	I0816 12:37:21.640657   22106 main.go:141] libmachine: (ha-863936-m02)     </interface>
	I0816 12:37:21.640669   22106 main.go:141] libmachine: (ha-863936-m02)     <serial type='pty'>
	I0816 12:37:21.640679   22106 main.go:141] libmachine: (ha-863936-m02)       <target port='0'/>
	I0816 12:37:21.640691   22106 main.go:141] libmachine: (ha-863936-m02)     </serial>
	I0816 12:37:21.640700   22106 main.go:141] libmachine: (ha-863936-m02)     <console type='pty'>
	I0816 12:37:21.640709   22106 main.go:141] libmachine: (ha-863936-m02)       <target type='serial' port='0'/>
	I0816 12:37:21.640720   22106 main.go:141] libmachine: (ha-863936-m02)     </console>
	I0816 12:37:21.640738   22106 main.go:141] libmachine: (ha-863936-m02)     <rng model='virtio'>
	I0816 12:37:21.640758   22106 main.go:141] libmachine: (ha-863936-m02)       <backend model='random'>/dev/random</backend>
	I0816 12:37:21.640770   22106 main.go:141] libmachine: (ha-863936-m02)     </rng>
	I0816 12:37:21.640788   22106 main.go:141] libmachine: (ha-863936-m02)     
	I0816 12:37:21.640796   22106 main.go:141] libmachine: (ha-863936-m02)     
	I0816 12:37:21.640806   22106 main.go:141] libmachine: (ha-863936-m02)   </devices>
	I0816 12:37:21.640817   22106 main.go:141] libmachine: (ha-863936-m02) </domain>
	I0816 12:37:21.640831   22106 main.go:141] libmachine: (ha-863936-m02) 
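The block above is the libvirt domain XML the kvm2 driver defines for the new ha-863936-m02 VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk as a virtio device, and two virtio NICs on the mk-ha-863936 and default networks. As a rough illustration of how such a definition can be assembled, here is a minimal Go sketch using text/template; the struct, template, and paths are illustrative stand-ins, not minikube's actual code.

package main

import (
	"os"
	"text/template"
)

// domainSpec holds the handful of values that vary per node in the XML above.
// It is an illustrative struct, not the kvm2 driver's real config type.
type domainSpec struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string // private network, e.g. mk-ha-863936
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	spec := domainSpec{
		Name:      "ha-863936-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-863936-m02.rawdisk",
		Network:   "mk-ha-863936",
	}
	// Render the XML to stdout; a driver would hand a string like this to libvirt.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}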
	I0816 12:37:21.647415   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c8:3b:98 in network default
	I0816 12:37:21.647962   22106 main.go:141] libmachine: (ha-863936-m02) Ensuring networks are active...
	I0816 12:37:21.647986   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:21.648730   22106 main.go:141] libmachine: (ha-863936-m02) Ensuring network default is active
	I0816 12:37:21.649070   22106 main.go:141] libmachine: (ha-863936-m02) Ensuring network mk-ha-863936 is active
	I0816 12:37:21.649455   22106 main.go:141] libmachine: (ha-863936-m02) Getting domain xml...
	I0816 12:37:21.650276   22106 main.go:141] libmachine: (ha-863936-m02) Creating domain...
	I0816 12:37:22.855208   22106 main.go:141] libmachine: (ha-863936-m02) Waiting to get IP...
	I0816 12:37:22.856103   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:22.856557   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:22.856597   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:22.856536   22488 retry.go:31] will retry after 272.389415ms: waiting for machine to come up
	I0816 12:37:23.130961   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:23.131461   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:23.131484   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:23.131417   22488 retry.go:31] will retry after 263.73211ms: waiting for machine to come up
	I0816 12:37:23.396863   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:23.397312   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:23.397337   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:23.397273   22488 retry.go:31] will retry after 313.449142ms: waiting for machine to come up
	I0816 12:37:23.712539   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:23.712963   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:23.712989   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:23.712936   22488 retry.go:31] will retry after 505.914988ms: waiting for machine to come up
	I0816 12:37:24.220249   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:24.220674   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:24.220702   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:24.220630   22488 retry.go:31] will retry after 707.95495ms: waiting for machine to come up
	I0816 12:37:24.930477   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:24.930826   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:24.930856   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:24.930782   22488 retry.go:31] will retry after 639.579813ms: waiting for machine to come up
	I0816 12:37:25.571536   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:25.572001   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:25.572031   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:25.571949   22488 retry.go:31] will retry after 1.052898678s: waiting for machine to come up
	I0816 12:37:26.625833   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:26.626274   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:26.626326   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:26.626222   22488 retry.go:31] will retry after 1.484593769s: waiting for machine to come up
	I0816 12:37:28.112785   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:28.113240   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:28.113261   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:28.113173   22488 retry.go:31] will retry after 1.265009506s: waiting for machine to come up
	I0816 12:37:29.379613   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:29.379966   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:29.379989   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:29.379927   22488 retry.go:31] will retry after 2.04114548s: waiting for machine to come up
	I0816 12:37:31.422945   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:31.423402   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:31.423436   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:31.423364   22488 retry.go:31] will retry after 2.857495578s: waiting for machine to come up
	I0816 12:37:34.284282   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:34.284671   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:34.284694   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:34.284642   22488 retry.go:31] will retry after 3.238481842s: waiting for machine to come up
	I0816 12:37:37.525727   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:37.526164   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find current IP address of domain ha-863936-m02 in network mk-ha-863936
	I0816 12:37:37.526184   22106 main.go:141] libmachine: (ha-863936-m02) DBG | I0816 12:37:37.526113   22488 retry.go:31] will retry after 4.3057399s: waiting for machine to come up
	I0816 12:37:41.833819   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.834270   22106 main.go:141] libmachine: (ha-863936-m02) Found IP for machine: 192.168.39.101
	I0816 12:37:41.834289   22106 main.go:141] libmachine: (ha-863936-m02) Reserving static IP address...
	I0816 12:37:41.834299   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has current primary IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.834724   22106 main.go:141] libmachine: (ha-863936-m02) DBG | unable to find host DHCP lease matching {name: "ha-863936-m02", mac: "52:54:00:c0:1e:73", ip: "192.168.39.101"} in network mk-ha-863936
	I0816 12:37:41.905117   22106 main.go:141] libmachine: (ha-863936-m02) Reserved static IP address: 192.168.39.101
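The repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop: the driver checks the libvirt network's DHCP leases for the node's MAC address and, while no lease exists, sleeps for a growing, jittered interval before trying again. Below is a minimal Go sketch of that shape; lookupIP is a hypothetical placeholder for the lease query, and the delay schedule is illustrative rather than the driver's exact one.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the DHCP leases of the libvirt network;
// it is hypothetical and only exists for this sketch.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until the machine has an address, roughly doubling the
// delay each attempt (with jitter) up to a deadline, like the log above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine %s never got an IP within %v", mac, deadline)
}

func main() {
	if _, err := waitForIP("52:54:00:c0:1e:73", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}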
	I0816 12:37:41.905143   22106 main.go:141] libmachine: (ha-863936-m02) Waiting for SSH to be available...
	I0816 12:37:41.905189   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Getting to WaitForSSH function...
	I0816 12:37:41.907974   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.908426   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:41.908450   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:41.908608   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Using SSH client type: external
	I0816 12:37:41.908632   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa (-rw-------)
	I0816 12:37:41.908663   22106 main.go:141] libmachine: (ha-863936-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:37:41.908671   22106 main.go:141] libmachine: (ha-863936-m02) DBG | About to run SSH command:
	I0816 12:37:41.908684   22106 main.go:141] libmachine: (ha-863936-m02) DBG | exit 0
	I0816 12:37:42.036782   22106 main.go:141] libmachine: (ha-863936-m02) DBG | SSH cmd err, output: <nil>: 
	I0816 12:37:42.037083   22106 main.go:141] libmachine: (ha-863936-m02) KVM machine creation complete!
	I0816 12:37:42.037407   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetConfigRaw
	I0816 12:37:42.037913   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:42.038073   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:42.038308   22106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:37:42.038324   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:37:42.039541   22106 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:37:42.039571   22106 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:37:42.039577   22106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:37:42.039584   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.041745   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.042058   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.042097   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.042251   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.042374   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.042479   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.042579   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.042752   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.042946   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.042957   22106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:37:42.148036   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
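SSH readiness above is probed by running a no-op command (exit 0) against the guest with host-key checking disabled, retrying until it succeeds. Here is a short Go sketch of the same probe using os/exec and the ssh options visible in the log; the address and key path are taken from this run, while the retry count and helper name are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once a no-op command ("exit 0") succeeds over SSH,
// mirroring the WaitForSSH probe in the log above.
func sshReady(addr, keyPath string) error {
	for i := 0; i < 30; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // the guest's sshd is up and the key is accepted
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", addr)
}

func main() {
	err := sshReady("192.168.39.101", "/path/to/machines/ha-863936-m02/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}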
	I0816 12:37:42.148058   22106 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:37:42.148067   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.150631   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.150997   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.151019   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.151219   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.151414   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.151595   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.151733   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.151890   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.152091   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.152105   22106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:37:42.257667   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:37:42.257739   22106 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:37:42.257750   22106 main.go:141] libmachine: Provisioning with buildroot...
	I0816 12:37:42.257758   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:42.257982   22106 buildroot.go:166] provisioning hostname "ha-863936-m02"
	I0816 12:37:42.258013   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:42.258225   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.260648   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.261018   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.261047   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.261197   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.261376   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.261498   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.261602   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.261775   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.261937   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.261949   22106 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936-m02 && echo "ha-863936-m02" | sudo tee /etc/hostname
	I0816 12:37:42.380594   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936-m02
	
	I0816 12:37:42.380615   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.383327   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.383693   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.383719   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.383936   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.384178   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.384347   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.384499   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.384657   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.384846   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.384863   22106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:37:42.501738   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
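The hostname step writes /etc/hostname and then patches /etc/hosts so that 127.0.1.1 maps to the new node name, exactly as the shell fragment above shows. The following Go sketch only composes those two shell commands for a given name (how minikube actually ships them over SSH is out of scope here); the helper is illustrative.

package main

import "fmt"

// provisionHostname builds the two shell commands the log shows: one to set
// the hostname, one to make sure /etc/hosts has a 127.0.1.1 entry for it.
func provisionHostname(name string) []string {
	setHostname := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return []string{setHostname, fixHosts}
}

func main() {
	for _, cmd := range provisionHostname("ha-863936-m02") {
		fmt.Println(cmd)
	}
}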
	I0816 12:37:42.501765   22106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:37:42.501779   22106 buildroot.go:174] setting up certificates
	I0816 12:37:42.501788   22106 provision.go:84] configureAuth start
	I0816 12:37:42.501796   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetMachineName
	I0816 12:37:42.502045   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:42.504618   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.504898   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.504943   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.505135   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.507187   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.507542   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.507570   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.507718   22106 provision.go:143] copyHostCerts
	I0816 12:37:42.507747   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:37:42.507785   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:37:42.507797   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:37:42.507873   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:37:42.507975   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:37:42.508000   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:37:42.508009   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:37:42.508041   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:37:42.508111   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:37:42.508137   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:37:42.508146   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:37:42.508193   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:37:42.508286   22106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936-m02 san=[127.0.0.1 192.168.39.101 ha-863936-m02 localhost minikube]
	I0816 12:37:42.645945   22106 provision.go:177] copyRemoteCerts
	I0816 12:37:42.645994   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:37:42.646015   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.648696   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.649035   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.649061   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.649216   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.649345   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.649484   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.649568   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:42.731781   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:37:42.731841   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:37:42.755699   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:37:42.755759   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:37:42.778658   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:37:42.778716   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 12:37:42.801588   22106 provision.go:87] duration metric: took 299.788614ms to configureAuth
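configureAuth above generates a server certificate for the new node, signed by the local minikube CA, with SANs covering 127.0.0.1, the node IP, the hostname, localhost and minikube, and then copies the CA and server certs into /etc/docker on the guest. The sketch below shows the SAN handling with crypto/x509; it self-signs instead of using the CA key purely to stay self-contained, so treat it as an illustration rather than minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs matching the log: loopback, the node IP, and the host names.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-863936-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
		DNSNames:     []string{"ha-863936-m02", "localhost", "minikube"},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}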
	I0816 12:37:42.801637   22106 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:37:42.801814   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:42.801879   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:42.804443   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.804758   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:42.804786   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:42.804988   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:42.805161   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.805302   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:42.805417   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:42.805549   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:42.805716   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:42.805730   22106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:37:43.072228   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:37:43.072252   22106 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:37:43.072261   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetURL
	I0816 12:37:43.073511   22106 main.go:141] libmachine: (ha-863936-m02) DBG | Using libvirt version 6000000
	I0816 12:37:43.075706   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.076023   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.076049   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.076189   22106 main.go:141] libmachine: Docker is up and running!
	I0816 12:37:43.076204   22106 main.go:141] libmachine: Reticulating splines...
	I0816 12:37:43.076211   22106 client.go:171] duration metric: took 22.087958589s to LocalClient.Create
	I0816 12:37:43.076229   22106 start.go:167] duration metric: took 22.088010164s to libmachine.API.Create "ha-863936"
	I0816 12:37:43.076237   22106 start.go:293] postStartSetup for "ha-863936-m02" (driver="kvm2")
	I0816 12:37:43.076246   22106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:37:43.076269   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.076484   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:37:43.076507   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:43.078280   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.078557   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.078583   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.078707   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.078871   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.079017   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.079154   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:43.164009   22106 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:37:43.168315   22106 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:37:43.168331   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:37:43.168408   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:37:43.168499   22106 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:37:43.168509   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:37:43.168615   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:37:43.177933   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:37:43.201327   22106 start.go:296] duration metric: took 125.079274ms for postStartSetup
	I0816 12:37:43.201370   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetConfigRaw
	I0816 12:37:43.201918   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:43.204181   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.204514   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.204536   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.204779   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:37:43.204971   22106 start.go:128] duration metric: took 22.234710675s to createHost
	I0816 12:37:43.204991   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:43.206856   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.207256   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.207281   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.207411   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.207587   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.207749   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.207875   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.208032   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:37:43.208178   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0816 12:37:43.208192   22106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:37:43.317639   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811863.288503434
	
	I0816 12:37:43.317655   22106 fix.go:216] guest clock: 1723811863.288503434
	I0816 12:37:43.317662   22106 fix.go:229] Guest: 2024-08-16 12:37:43.288503434 +0000 UTC Remote: 2024-08-16 12:37:43.204981486 +0000 UTC m=+70.209334380 (delta=83.521948ms)
	I0816 12:37:43.317676   22106 fix.go:200] guest clock delta is within tolerance: 83.521948ms
	I0816 12:37:43.317680   22106 start.go:83] releasing machines lock for "ha-863936-m02", held for 22.347510342s
	I0816 12:37:43.317698   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.317961   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:43.320459   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.320822   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.320851   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.322946   22106 out.go:177] * Found network options:
	I0816 12:37:43.324216   22106 out.go:177]   - NO_PROXY=192.168.39.2
	W0816 12:37:43.325417   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:37:43.325449   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.325979   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.326160   22106 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:37:43.326272   22106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:37:43.326310   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	W0816 12:37:43.326341   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:37:43.326413   22106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:37:43.326434   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:37:43.328950   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329206   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329315   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.329341   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329468   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.329545   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:43.329574   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:43.329635   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.329705   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:37:43.329776   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.329827   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:37:43.329884   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:43.329946   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:37:43.330046   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:37:43.561485   22106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:37:43.567214   22106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:37:43.567276   22106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:37:43.583545   22106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:37:43.583562   22106 start.go:495] detecting cgroup driver to use...
	I0816 12:37:43.583612   22106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:37:43.599789   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:37:43.613198   22106 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:37:43.613254   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:37:43.626286   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:37:43.640299   22106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:37:43.762732   22106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:37:43.923013   22106 docker.go:233] disabling docker service ...
	I0816 12:37:43.923085   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:37:43.937191   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:37:43.949587   22106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:37:44.069985   22106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:37:44.185311   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:37:44.199367   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:37:44.217870   22106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:37:44.217927   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.228954   22106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:37:44.229018   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.240072   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.251064   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.261798   22106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:37:44.272677   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.283104   22106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:37:44.300285   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
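(Editor's note: the sed/grep edits above all target /etc/crio/crio.conf.d/02-crio.conf on the new node. A quick, illustrative way to confirm the resulting values from the host; the profile and node names are taken from this run, and the exact output may differ:)

  minikube -p ha-863936 ssh -n m02 "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"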
	I0816 12:37:44.311213   22106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:37:44.320966   22106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:37:44.321017   22106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:37:44.334167   22106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
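(Editor's note: the sysctl probe above fails only because the br_netfilter module is not loaded yet; the modprobe that follows resolves it. Reproducing the same check/fix by hand inside the guest looks like this; the commands mirror the log and are shown purely as an illustration:)

  sudo modprobe br_netfilter
  sudo sysctl net.bridge.bridge-nf-call-iptables      # resolvable once the module is loaded; typically reports 1
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"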
	I0816 12:37:44.344659   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:37:44.465534   22106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:37:44.597973   22106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:37:44.598063   22106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:37:44.603066   22106 start.go:563] Will wait 60s for crictl version
	I0816 12:37:44.603115   22106 ssh_runner.go:195] Run: which crictl
	I0816 12:37:44.606849   22106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:37:44.652499   22106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:37:44.652588   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:44.681284   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:37:44.710540   22106 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:37:44.711910   22106 out.go:177]   - env NO_PROXY=192.168.39.2
	I0816 12:37:44.712951   22106 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:37:44.715737   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:44.716090   22106 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:37:36 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:37:44.716114   22106 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:37:44.716331   22106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:37:44.720468   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:37:44.733172   22106 mustload.go:65] Loading cluster: ha-863936
	I0816 12:37:44.733378   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:37:44.733640   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:44.733679   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:44.747821   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0816 12:37:44.748195   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:44.748665   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:44.748683   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:44.748962   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:44.749131   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:37:44.750510   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:37:44.750816   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:37:44.750850   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:37:44.764419   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 12:37:44.764744   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:37:44.765220   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:37:44.765240   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:37:44.765521   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:37:44.765698   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:37:44.765825   22106 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.101
	I0816 12:37:44.765834   22106 certs.go:194] generating shared ca certs ...
	I0816 12:37:44.765852   22106 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:44.765973   22106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:37:44.766028   22106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:37:44.766041   22106 certs.go:256] generating profile certs ...
	I0816 12:37:44.766123   22106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:37:44.766153   22106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1
	I0816 12:37:44.766174   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.254]
	I0816 12:37:44.830541   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1 ...
	I0816 12:37:44.830570   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1: {Name:mkfed86040fee228ea9f3c3ee1e30bba4a154412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:44.830749   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1 ...
	I0816 12:37:44.830771   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1: {Name:mk2c664260d68b6ab0552ce83b5ab0e9b76f731f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:37:44.830883   22106 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.f75229f1 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:37:44.831032   22106 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.f75229f1 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
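(Editor's note: the apiserver serving certificate generated above carries every control-plane IP plus the HA VIP as SANs. Once it has been copied to the node at /var/lib/minikube/certs/apiserver.crt (see the scp further down), the SAN list can be inspected with openssl; a minimal sketch, assuming openssl is available in the guest:)

  minikube -p ha-863936 ssh -n m02 "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"
  # IP SANs expected from this run: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.254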
	I0816 12:37:44.831182   22106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:37:44.831199   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:37:44.831224   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:37:44.831243   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:37:44.831262   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:37:44.831280   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:37:44.831298   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:37:44.831316   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:37:44.831335   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:37:44.831395   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:37:44.831434   22106 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:37:44.831447   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:37:44.831591   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:37:44.831704   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:37:44.831740   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:37:44.831807   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:37:44.831849   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:44.831871   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:37:44.831890   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:37:44.831929   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:37:44.834714   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:44.835032   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:37:44.835051   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:37:44.835232   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:37:44.835417   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:37:44.835565   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:37:44.835674   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:37:44.905239   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 12:37:44.910247   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 12:37:44.920953   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 12:37:44.925193   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0816 12:37:44.935453   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 12:37:44.939756   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 12:37:44.949753   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 12:37:44.953788   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 12:37:44.963955   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 12:37:44.968038   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 12:37:44.977647   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 12:37:44.981682   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0816 12:37:44.991402   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:37:45.016672   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:37:45.040484   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:37:45.063887   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:37:45.086552   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 12:37:45.111080   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 12:37:45.134678   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:37:45.158653   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:37:45.181591   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:37:45.204802   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:37:45.228118   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:37:45.251573   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 12:37:45.269286   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0816 12:37:45.285849   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 12:37:45.301292   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 12:37:45.317478   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 12:37:45.334653   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0816 12:37:45.351552   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 12:37:45.367606   22106 ssh_runner.go:195] Run: openssl version
	I0816 12:37:45.373185   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:37:45.383850   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:37:45.388246   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:37:45.388283   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:37:45.394072   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:37:45.405658   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:37:45.416660   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:45.421231   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:45.421280   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:37:45.427251   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:37:45.438506   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:37:45.449273   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:37:45.453684   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:37:45.453733   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:37:45.459563   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:37:45.470406   22106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:37:45.474505   22106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:37:45.474550   22106 kubeadm.go:934] updating node {m02 192.168.39.101 8443 v1.31.0 crio true true} ...
	I0816 12:37:45.474626   22106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
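(Editor's note: the kubelet unit drop-in rendered above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a little further down). Once the node is running, the effective unit can be reviewed with systemctl; an illustrative check, with profile and node names taken from this run:)

  minikube -p ha-863936 ssh -n m02 "systemctl cat kubelet"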
	I0816 12:37:45.474649   22106 kube-vip.go:115] generating kube-vip config ...
	I0816 12:37:45.474676   22106 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:37:45.492902   22106 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:37:45.492972   22106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
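(Editor's note: kube-vip runs as a static pod from /etc/kubernetes/manifests/kube-vip.yaml and advertises the control-plane VIP 192.168.39.254 on port 8443, per the config above. A rough, illustrative way to confirm it came up; the grep filter and the unauthenticated /version probe are assumptions on the editor's part, not part of the test:)

  minikube -p ha-863936 ssh -n m02 "sudo crictl pods | grep kube-vip"
  curl -k https://192.168.39.254:8443/version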
	I0816 12:37:45.493019   22106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:37:45.502996   22106 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 12:37:45.503055   22106 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 12:37:45.512956   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 12:37:45.512978   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:37:45.513033   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:37:45.513091   22106 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 12:37:45.513124   22106 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 12:37:45.517363   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 12:37:45.517389   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 12:38:24.802198   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:38:24.802276   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:38:24.807378   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 12:38:24.807427   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 12:38:36.355721   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:38:36.370820   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:38:36.370943   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:38:36.375474   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 12:38:36.375500   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
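(Editor's note: the kubectl, kubeadm, and kubelet binaries are fetched from dl.k8s.io with a checksum= qualifier, so each download is checked against the published .sha256 file before being cached and copied to the node. The same verification can be done by hand; a sketch for kubectl, using the URL shown in the log and a standard sha256sum check:)

  curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
  curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256"
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check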
	I0816 12:38:36.684466   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 12:38:36.694187   22106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 12:38:36.710456   22106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:38:36.726742   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 12:38:36.742268   22106 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:38:36.745775   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:38:36.757469   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:38:36.877689   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:38:36.893843   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:38:36.894275   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:38:36.894326   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:38:36.909385   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0816 12:38:36.909852   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:38:36.910330   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:38:36.910349   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:38:36.910641   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:38:36.910826   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:38:36.910982   22106 start.go:317] joinCluster: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:38:36.911093   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 12:38:36.911114   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:38:36.914091   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:38:36.914463   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:38:36.914491   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:38:36.914737   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:38:36.914950   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:38:36.915092   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:38:36.915239   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:38:37.057232   22106 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:38:37.057277   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ly6k5a.xfrdulb4vc1nup4v --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I0816 12:38:57.130571   22106 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ly6k5a.xfrdulb4vc1nup4v --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (20.073270808s)
	I0816 12:38:57.130611   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 12:38:57.758540   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863936-m02 minikube.k8s.io/updated_at=2024_08_16T12_38_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=ha-863936 minikube.k8s.io/primary=false
	I0816 12:38:57.887510   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863936-m02 node-role.kubernetes.io/control-plane:NoSchedule-
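(Editor's note: at this point the second control plane has joined the cluster, been labeled, and had the control-plane NoSchedule taint removed. An illustrative way to confirm the node state from the host; the kubeconfig context is assumed to match the profile name, as minikube normally sets it:)

  kubectl --context ha-863936 get nodes -o wide
  kubectl --context ha-863936 get node ha-863936-m02 --show-labels
  kubectl --context ha-863936 describe node ha-863936-m02 | grep -i taint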
	I0816 12:38:58.012739   22106 start.go:319] duration metric: took 21.101753547s to joinCluster
	I0816 12:38:58.012807   22106 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:38:58.013086   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:38:58.014382   22106 out.go:177] * Verifying Kubernetes components...
	I0816 12:38:58.015777   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:38:58.266323   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:38:58.326752   22106 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:38:58.326977   22106 kapi.go:59] client config for ha-863936: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key", CAFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 12:38:58.327034   22106 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0816 12:38:58.327221   22106 node_ready.go:35] waiting up to 6m0s for node "ha-863936-m02" to be "Ready" ...
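(Editor's note: the polling below repeatedly GETs the Node object until its Ready condition becomes True. A roughly equivalent one-liner, shown only as an illustration; the timeout mirrors the 6m wait above:)

  kubectl --context ha-863936 wait --for=condition=Ready node/ha-863936-m02 --timeout=6m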
	I0816 12:38:58.327320   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:58.327332   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:58.327340   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:58.327344   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:58.350331   22106 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0816 12:38:58.828408   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:58.828437   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:58.828450   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:58.828455   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:58.837724   22106 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0816 12:38:59.327834   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:59.327861   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:59.327871   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:59.327876   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:59.331034   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:38:59.828050   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:38:59.828073   22106 round_trippers.go:469] Request Headers:
	I0816 12:38:59.828085   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:38:59.828090   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:38:59.832321   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:00.328319   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:00.328340   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:00.328348   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:00.328353   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:00.331910   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:00.332603   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:00.828079   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:00.828107   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:00.828118   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:00.828126   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:00.834382   22106 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 12:39:01.327891   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:01.327914   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:01.327923   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:01.327928   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:01.331659   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:01.828076   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:01.828100   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:01.828112   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:01.828118   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:01.832572   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:02.327817   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:02.327841   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:02.327849   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:02.327853   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:02.332131   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:02.332714   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:02.827732   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:02.827753   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:02.827761   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:02.827765   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:02.831158   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:03.328253   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:03.328274   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:03.328282   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:03.328289   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:03.335833   22106 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 12:39:03.828004   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:03.828023   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:03.828032   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:03.828036   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:03.830735   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:04.327644   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:04.327669   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:04.327678   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:04.327682   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:04.331296   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:04.827865   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:04.827885   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:04.827893   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:04.827896   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:04.831546   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:04.832154   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:05.327517   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:05.327545   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:05.327556   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:05.327562   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:05.330381   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:05.828427   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:05.828451   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:05.828460   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:05.828465   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:05.832531   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:06.327878   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:06.327899   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:06.327907   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:06.327910   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:06.331640   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:06.828067   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:06.828087   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:06.828095   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:06.828101   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:06.831261   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:07.327758   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:07.327779   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:07.327787   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:07.327792   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:07.331448   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:07.332038   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:07.827436   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:07.827460   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:07.827470   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:07.827475   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:07.830494   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:08.327510   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:08.327534   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:08.327542   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:08.327548   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:08.332651   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:39:08.827538   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:08.827562   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:08.827571   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:08.827576   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:08.830857   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:09.327698   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:09.327719   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:09.327727   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:09.327730   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:09.331441   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:09.828240   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:09.828263   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:09.828269   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:09.828274   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:09.831561   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:09.832339   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:10.327914   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:10.327936   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:10.327944   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:10.327948   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:10.330956   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:10.828073   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:10.828093   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:10.828101   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:10.828105   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:10.831785   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:11.328326   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:11.328351   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:11.328360   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:11.328365   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:11.331884   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:11.827591   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:11.827613   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:11.827621   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:11.827624   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:11.830668   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:12.328250   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:12.328276   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:12.328288   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:12.328294   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:12.331805   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:12.332422   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:12.827614   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:12.827641   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:12.827651   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:12.827657   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:12.831532   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:13.328315   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:13.328339   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:13.328347   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:13.328353   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:13.331899   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:13.827992   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:13.828020   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:13.828032   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:13.828039   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:13.832214   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:14.328331   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:14.328357   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:14.328366   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:14.328371   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:14.331879   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:14.827609   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:14.827637   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:14.827649   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:14.827655   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:14.831305   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:14.831893   22106 node_ready.go:53] node "ha-863936-m02" has status "Ready":"False"
	I0816 12:39:15.328285   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:15.328312   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:15.328323   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:15.328328   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:15.331836   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:15.828063   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:15.828088   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:15.828101   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:15.828108   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:15.840834   22106 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0816 12:39:16.328382   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:16.328404   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:16.328412   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:16.328417   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:16.332032   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:16.827633   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:16.827654   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:16.827662   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:16.827666   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:16.831155   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.327417   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:17.327449   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.327457   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.327461   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.331265   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.331872   22106 node_ready.go:49] node "ha-863936-m02" has status "Ready":"True"
	I0816 12:39:17.331889   22106 node_ready.go:38] duration metric: took 19.004645121s for node "ha-863936-m02" to be "Ready" ...
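The loop above is a plain readiness poll: roughly every 500ms the test re-issues GET /api/v1/nodes/ha-863936-m02 until the node's Ready condition flips to True. A minimal sketch of the same pattern with client-go follows; the kubeconfig location, poll interval, and timeout are assumptions for illustration, not the values or code path minikube itself uses.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the Node object until its Ready condition is True
// or the timeout expires, mirroring the GET loop in the log above.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// clientcmd.RecommendedHomeFile (~/.kube/config) is a placeholder, not the test's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-863936-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

The subsequent per-pod waits ("coredns-...", "etcd-...", "kube-apiserver-...") follow the same shape, only checking the PodReady condition instead of NodeReady.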
	I0816 12:39:17.331898   22106 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:39:17.331957   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:17.331966   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.331973   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.331981   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.336440   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:17.345623   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.345712   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7gfgm
	I0816 12:39:17.345722   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.345730   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.345734   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.350186   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:17.351194   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.351207   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.351213   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.351216   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.353859   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.354806   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.354822   22106 pod_ready.go:82] duration metric: took 9.175178ms for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.354834   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.354885   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ssb5h
	I0816 12:39:17.354895   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.354904   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.354912   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.357694   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.358348   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.358359   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.358365   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.358368   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.360647   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.361047   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.361061   22106 pod_ready.go:82] duration metric: took 6.22116ms for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.361070   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.361122   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936
	I0816 12:39:17.361132   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.361141   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.361146   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.363551   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.364299   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.364312   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.364321   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.364328   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.366668   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.367071   22106 pod_ready.go:93] pod "etcd-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.367087   22106 pod_ready.go:82] duration metric: took 6.010864ms for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.367099   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.367159   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m02
	I0816 12:39:17.367169   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.367188   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.367196   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.370108   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.370764   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:17.370779   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.370786   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.370789   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.373172   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:17.373650   22106 pod_ready.go:93] pod "etcd-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.373667   22106 pod_ready.go:82] duration metric: took 6.560533ms for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.373685   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.528070   22106 request.go:632] Waited for 154.326739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:39:17.528141   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:39:17.528148   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.528155   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.528158   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.531759   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.727755   22106 request.go:632] Waited for 195.323563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.727817   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:17.727822   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.727830   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.727838   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.730878   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:17.731434   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:17.731452   22106 pod_ready.go:82] duration metric: took 357.759007ms for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.731465   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:17.927625   22106 request.go:632] Waited for 196.086028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:39:17.927679   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:39:17.927686   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:17.927695   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:17.927701   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:17.930446   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:18.128100   22106 request.go:632] Waited for 197.13209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.128173   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.128180   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.128188   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.128196   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.131748   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.132266   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:18.132289   22106 pod_ready.go:82] duration metric: took 400.816169ms for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.132301   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.327778   22106 request.go:632] Waited for 195.404436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:39:18.327839   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:39:18.327845   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.327852   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.327856   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.330979   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.527899   22106 request.go:632] Waited for 196.351485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:18.527973   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:18.527983   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.527991   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.527998   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.531595   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.532029   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:18.532046   22106 pod_ready.go:82] duration metric: took 399.737901ms for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.532057   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.728195   22106 request.go:632] Waited for 196.05883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:39:18.728249   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:39:18.728254   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.728261   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.728265   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.731338   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.927599   22106 request.go:632] Waited for 195.289378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.927668   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:18.927674   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:18.927681   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:18.927686   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:18.930788   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:18.931536   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:18.931553   22106 pod_ready.go:82] duration metric: took 399.485231ms for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:18.931562   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.127787   22106 request.go:632] Waited for 196.163483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:39:19.127874   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:39:19.127883   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.127892   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.127900   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.131246   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:19.328207   22106 request.go:632] Waited for 196.36073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:19.328281   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:19.328287   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.328296   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.328300   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.331668   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:19.332189   22106 pod_ready.go:93] pod "kube-proxy-7lvfc" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:19.332209   22106 pod_ready.go:82] duration metric: took 400.637905ms for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.332217   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.528262   22106 request.go:632] Waited for 195.977836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:39:19.528317   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:39:19.528322   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.528331   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.528337   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.531651   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:19.727478   22106 request.go:632] Waited for 195.290304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:19.727555   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:19.727564   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.727572   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.727577   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.730371   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:19.731361   22106 pod_ready.go:93] pod "kube-proxy-g75mg" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:19.731383   22106 pod_ready.go:82] duration metric: took 399.159306ms for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.731394   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:19.928120   22106 request.go:632] Waited for 196.616943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:39:19.928191   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:39:19.928198   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:19.928209   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:19.928215   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:19.932268   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:20.127965   22106 request.go:632] Waited for 195.079329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:20.128044   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:39:20.128067   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.128078   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.128086   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.131267   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:20.131865   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:20.131880   22106 pod_ready.go:82] duration metric: took 400.477557ms for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:20.131890   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:20.328055   22106 request.go:632] Waited for 196.107585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:39:20.328107   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:39:20.328111   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.328119   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.328125   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.331220   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:20.528159   22106 request.go:632] Waited for 196.390149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:20.528231   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:39:20.528237   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.528248   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.528258   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.531257   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:39:20.531961   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:39:20.531979   22106 pod_ready.go:82] duration metric: took 400.081662ms for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:39:20.531990   22106 pod_ready.go:39] duration metric: took 3.200081497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
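The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages in the loop above come from client-go's local rate limiter (default 5 QPS with a burst of 10), not from API Priority and Fairness on the server: bursts of back-to-back GETs get delayed on the client and logged. A minimal sketch of raising those limits on a rest.Config; the values are illustrative, not what minikube configures.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; the "client-side throttling" waits in the
	// log above are this limiter delaying requests locally. The numbers below
	// are illustrative only.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("client configured with higher QPS/Burst")
}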
	I0816 12:39:20.532005   22106 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:39:20.532062   22106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:39:20.548728   22106 api_server.go:72] duration metric: took 22.535890335s to wait for apiserver process to appear ...
	I0816 12:39:20.548756   22106 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:39:20.548774   22106 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0816 12:39:20.553303   22106 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0816 12:39:20.553367   22106 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0816 12:39:20.553375   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.553383   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.553386   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.554238   22106 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 12:39:20.554352   22106 api_server.go:141] control plane version: v1.31.0
	I0816 12:39:20.554369   22106 api_server.go:131] duration metric: took 5.606374ms to wait for apiserver health ...
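The health gate above is a single GET against https://192.168.39.2:8443/healthz, treating a 200 response with the literal body "ok" as healthy, followed by GET /version to read the control-plane version. A standalone sketch of the same probe with net/http follows; the InsecureSkipVerify transport is a simplification for illustration, whereas the real client authenticates with the cluster CA and client certificates from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Simplification: skip certificate verification; a production check should
	// trust the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}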
	I0816 12:39:20.554379   22106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:39:20.727788   22106 request.go:632] Waited for 173.337204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:20.727865   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:20.727871   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.727879   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.727886   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.732600   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:20.737952   22106 system_pods.go:59] 17 kube-system pods found
	I0816 12:39:20.737977   22106 system_pods.go:61] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:39:20.737983   22106 system_pods.go:61] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:39:20.737987   22106 system_pods.go:61] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:39:20.737990   22106 system_pods.go:61] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:39:20.737994   22106 system_pods.go:61] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:39:20.737997   22106 system_pods.go:61] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:39:20.738000   22106 system_pods.go:61] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:39:20.738004   22106 system_pods.go:61] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:39:20.738007   22106 system_pods.go:61] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:39:20.738012   22106 system_pods.go:61] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:39:20.738015   22106 system_pods.go:61] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:39:20.738018   22106 system_pods.go:61] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:39:20.738021   22106 system_pods.go:61] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:39:20.738024   22106 system_pods.go:61] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:39:20.738028   22106 system_pods.go:61] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:39:20.738032   22106 system_pods.go:61] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:39:20.738039   22106 system_pods.go:61] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:39:20.738047   22106 system_pods.go:74] duration metric: took 183.660899ms to wait for pod list to return data ...
	I0816 12:39:20.738059   22106 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:39:20.927437   22106 request.go:632] Waited for 189.309729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:39:20.927494   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:39:20.927499   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:20.927505   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:20.927514   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:20.931457   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:39:20.931679   22106 default_sa.go:45] found service account: "default"
	I0816 12:39:20.931696   22106 default_sa.go:55] duration metric: took 193.631337ms for default service account to be created ...
	I0816 12:39:20.931705   22106 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:39:21.127633   22106 request.go:632] Waited for 195.859654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:21.127713   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:39:21.127723   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:21.127732   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:21.127740   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:21.132641   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:21.136822   22106 system_pods.go:86] 17 kube-system pods found
	I0816 12:39:21.136846   22106 system_pods.go:89] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:39:21.136852   22106 system_pods.go:89] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:39:21.136856   22106 system_pods.go:89] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:39:21.136860   22106 system_pods.go:89] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:39:21.136864   22106 system_pods.go:89] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:39:21.136869   22106 system_pods.go:89] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:39:21.136873   22106 system_pods.go:89] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:39:21.136876   22106 system_pods.go:89] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:39:21.136880   22106 system_pods.go:89] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:39:21.136884   22106 system_pods.go:89] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:39:21.136889   22106 system_pods.go:89] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:39:21.136893   22106 system_pods.go:89] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:39:21.136902   22106 system_pods.go:89] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:39:21.136923   22106 system_pods.go:89] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:39:21.136933   22106 system_pods.go:89] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:39:21.136938   22106 system_pods.go:89] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:39:21.136943   22106 system_pods.go:89] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:39:21.136956   22106 system_pods.go:126] duration metric: took 205.243032ms to wait for k8s-apps to be running ...
	I0816 12:39:21.136967   22106 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:39:21.137011   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:39:21.153172   22106 system_svc.go:56] duration metric: took 16.194838ms WaitForService to wait for kubelet
	I0816 12:39:21.153206   22106 kubeadm.go:582] duration metric: took 23.140371377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:39:21.153231   22106 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:39:21.327569   22106 request.go:632] Waited for 174.246241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0816 12:39:21.327624   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0816 12:39:21.327629   22106 round_trippers.go:469] Request Headers:
	I0816 12:39:21.327637   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:39:21.327640   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:39:21.331685   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:39:21.332618   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:39:21.332641   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:39:21.332654   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:39:21.332660   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:39:21.332667   22106 node_conditions.go:105] duration metric: took 179.429674ms to run NodePressure ...
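The node_conditions lines above read each node's capacity (17734596Ki of ephemeral storage and 2 CPUs per node) from the Node objects before declaring the NodePressure check done. A short sketch of pulling the same fields with client-go; as before, the kubeconfig path is a placeholder and this is not minikube's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Status.Capacity is a ResourceList keyed by resource name.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}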
	I0816 12:39:21.332684   22106 start.go:241] waiting for startup goroutines ...
	I0816 12:39:21.332713   22106 start.go:255] writing updated cluster config ...
	I0816 12:39:21.335242   22106 out.go:201] 
	I0816 12:39:21.337077   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:39:21.337195   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:39:21.338879   22106 out.go:177] * Starting "ha-863936-m03" control-plane node in "ha-863936" cluster
	I0816 12:39:21.340238   22106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:39:21.340264   22106 cache.go:56] Caching tarball of preloaded images
	I0816 12:39:21.340378   22106 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:39:21.340394   22106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:39:21.340485   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:39:21.340659   22106 start.go:360] acquireMachinesLock for ha-863936-m03: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:39:21.340698   22106 start.go:364] duration metric: took 21.46µs to acquireMachinesLock for "ha-863936-m03"
	I0816 12:39:21.340715   22106 start.go:93] Provisioning new machine with config: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:39:21.340805   22106 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0816 12:39:21.342415   22106 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 12:39:21.342506   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:21.342542   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:21.357466   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0816 12:39:21.357918   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:21.358353   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:21.358374   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:21.358661   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:21.358813   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:21.358960   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:21.359165   22106 start.go:159] libmachine.API.Create for "ha-863936" (driver="kvm2")
	I0816 12:39:21.359191   22106 client.go:168] LocalClient.Create starting
	I0816 12:39:21.359219   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 12:39:21.359256   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:39:21.359276   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:39:21.359342   22106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 12:39:21.359372   22106 main.go:141] libmachine: Decoding PEM data...
	I0816 12:39:21.359389   22106 main.go:141] libmachine: Parsing certificate...
	I0816 12:39:21.359413   22106 main.go:141] libmachine: Running pre-create checks...
	I0816 12:39:21.359424   22106 main.go:141] libmachine: (ha-863936-m03) Calling .PreCreateCheck
	I0816 12:39:21.359602   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetConfigRaw
	I0816 12:39:21.359988   22106 main.go:141] libmachine: Creating machine...
	I0816 12:39:21.360000   22106 main.go:141] libmachine: (ha-863936-m03) Calling .Create
	I0816 12:39:21.360136   22106 main.go:141] libmachine: (ha-863936-m03) Creating KVM machine...
	I0816 12:39:21.361486   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found existing default KVM network
	I0816 12:39:21.361652   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found existing private KVM network mk-ha-863936
	I0816 12:39:21.361767   22106 main.go:141] libmachine: (ha-863936-m03) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03 ...
	I0816 12:39:21.361788   22106 main.go:141] libmachine: (ha-863936-m03) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:39:21.361860   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.361775   23071 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:39:21.361947   22106 main.go:141] libmachine: (ha-863936-m03) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 12:39:21.588422   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.588305   23071 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa...
	I0816 12:39:21.689781   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.689670   23071 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/ha-863936-m03.rawdisk...
	I0816 12:39:21.689815   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Writing magic tar header
	I0816 12:39:21.689829   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Writing SSH key tar header
	I0816 12:39:21.689840   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:21.689803   23071 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03 ...
	I0816 12:39:21.689929   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03
	I0816 12:39:21.689961   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 12:39:21.689974   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:39:21.690013   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03 (perms=drwx------)
	I0816 12:39:21.690024   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 12:39:21.690039   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 12:39:21.690050   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home/jenkins
	I0816 12:39:21.690061   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Checking permissions on dir: /home
	I0816 12:39:21.690073   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Skipping /home - not owner
	I0816 12:39:21.690085   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 12:39:21.690101   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 12:39:21.690116   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 12:39:21.690133   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 12:39:21.690146   22106 main.go:141] libmachine: (ha-863936-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 12:39:21.690157   22106 main.go:141] libmachine: (ha-863936-m03) Creating domain...
	I0816 12:39:21.691185   22106 main.go:141] libmachine: (ha-863936-m03) define libvirt domain using xml: 
	I0816 12:39:21.691205   22106 main.go:141] libmachine: (ha-863936-m03) <domain type='kvm'>
	I0816 12:39:21.691215   22106 main.go:141] libmachine: (ha-863936-m03)   <name>ha-863936-m03</name>
	I0816 12:39:21.691223   22106 main.go:141] libmachine: (ha-863936-m03)   <memory unit='MiB'>2200</memory>
	I0816 12:39:21.691231   22106 main.go:141] libmachine: (ha-863936-m03)   <vcpu>2</vcpu>
	I0816 12:39:21.691244   22106 main.go:141] libmachine: (ha-863936-m03)   <features>
	I0816 12:39:21.691256   22106 main.go:141] libmachine: (ha-863936-m03)     <acpi/>
	I0816 12:39:21.691266   22106 main.go:141] libmachine: (ha-863936-m03)     <apic/>
	I0816 12:39:21.691276   22106 main.go:141] libmachine: (ha-863936-m03)     <pae/>
	I0816 12:39:21.691292   22106 main.go:141] libmachine: (ha-863936-m03)     
	I0816 12:39:21.691332   22106 main.go:141] libmachine: (ha-863936-m03)   </features>
	I0816 12:39:21.691356   22106 main.go:141] libmachine: (ha-863936-m03)   <cpu mode='host-passthrough'>
	I0816 12:39:21.691387   22106 main.go:141] libmachine: (ha-863936-m03)   
	I0816 12:39:21.691415   22106 main.go:141] libmachine: (ha-863936-m03)   </cpu>
	I0816 12:39:21.691445   22106 main.go:141] libmachine: (ha-863936-m03)   <os>
	I0816 12:39:21.691460   22106 main.go:141] libmachine: (ha-863936-m03)     <type>hvm</type>
	I0816 12:39:21.691472   22106 main.go:141] libmachine: (ha-863936-m03)     <boot dev='cdrom'/>
	I0816 12:39:21.691479   22106 main.go:141] libmachine: (ha-863936-m03)     <boot dev='hd'/>
	I0816 12:39:21.691489   22106 main.go:141] libmachine: (ha-863936-m03)     <bootmenu enable='no'/>
	I0816 12:39:21.691500   22106 main.go:141] libmachine: (ha-863936-m03)   </os>
	I0816 12:39:21.691509   22106 main.go:141] libmachine: (ha-863936-m03)   <devices>
	I0816 12:39:21.691518   22106 main.go:141] libmachine: (ha-863936-m03)     <disk type='file' device='cdrom'>
	I0816 12:39:21.691531   22106 main.go:141] libmachine: (ha-863936-m03)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/boot2docker.iso'/>
	I0816 12:39:21.691542   22106 main.go:141] libmachine: (ha-863936-m03)       <target dev='hdc' bus='scsi'/>
	I0816 12:39:21.691550   22106 main.go:141] libmachine: (ha-863936-m03)       <readonly/>
	I0816 12:39:21.691561   22106 main.go:141] libmachine: (ha-863936-m03)     </disk>
	I0816 12:39:21.691571   22106 main.go:141] libmachine: (ha-863936-m03)     <disk type='file' device='disk'>
	I0816 12:39:21.691584   22106 main.go:141] libmachine: (ha-863936-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 12:39:21.691600   22106 main.go:141] libmachine: (ha-863936-m03)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/ha-863936-m03.rawdisk'/>
	I0816 12:39:21.691611   22106 main.go:141] libmachine: (ha-863936-m03)       <target dev='hda' bus='virtio'/>
	I0816 12:39:21.691622   22106 main.go:141] libmachine: (ha-863936-m03)     </disk>
	I0816 12:39:21.691630   22106 main.go:141] libmachine: (ha-863936-m03)     <interface type='network'>
	I0816 12:39:21.691642   22106 main.go:141] libmachine: (ha-863936-m03)       <source network='mk-ha-863936'/>
	I0816 12:39:21.691655   22106 main.go:141] libmachine: (ha-863936-m03)       <model type='virtio'/>
	I0816 12:39:21.691667   22106 main.go:141] libmachine: (ha-863936-m03)     </interface>
	I0816 12:39:21.691678   22106 main.go:141] libmachine: (ha-863936-m03)     <interface type='network'>
	I0816 12:39:21.691688   22106 main.go:141] libmachine: (ha-863936-m03)       <source network='default'/>
	I0816 12:39:21.691698   22106 main.go:141] libmachine: (ha-863936-m03)       <model type='virtio'/>
	I0816 12:39:21.691707   22106 main.go:141] libmachine: (ha-863936-m03)     </interface>
	I0816 12:39:21.691717   22106 main.go:141] libmachine: (ha-863936-m03)     <serial type='pty'>
	I0816 12:39:21.691736   22106 main.go:141] libmachine: (ha-863936-m03)       <target port='0'/>
	I0816 12:39:21.691755   22106 main.go:141] libmachine: (ha-863936-m03)     </serial>
	I0816 12:39:21.691784   22106 main.go:141] libmachine: (ha-863936-m03)     <console type='pty'>
	I0816 12:39:21.691806   22106 main.go:141] libmachine: (ha-863936-m03)       <target type='serial' port='0'/>
	I0816 12:39:21.691827   22106 main.go:141] libmachine: (ha-863936-m03)     </console>
	I0816 12:39:21.691838   22106 main.go:141] libmachine: (ha-863936-m03)     <rng model='virtio'>
	I0816 12:39:21.691851   22106 main.go:141] libmachine: (ha-863936-m03)       <backend model='random'>/dev/random</backend>
	I0816 12:39:21.691867   22106 main.go:141] libmachine: (ha-863936-m03)     </rng>
	I0816 12:39:21.691879   22106 main.go:141] libmachine: (ha-863936-m03)     
	I0816 12:39:21.691888   22106 main.go:141] libmachine: (ha-863936-m03)     
	I0816 12:39:21.691897   22106 main.go:141] libmachine: (ha-863936-m03)   </devices>
	I0816 12:39:21.691909   22106 main.go:141] libmachine: (ha-863936-m03) </domain>
	I0816 12:39:21.691920   22106 main.go:141] libmachine: (ha-863936-m03) 
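The block above is the full libvirt domain XML generated for the m03 machine; the next step ("Creating domain...") defines that XML with libvirt and boots it. A minimal sketch of that define-and-start step using the libvirt Go bindings; the import path is an assumption (older code uses github.com/libvirt/libvirt-go), and the trimmed-down XML below is a stand-in for the full document printed in the log, not what minikube actually passes.

package main

import (
	"fmt"

	"libvirt.org/go/libvirt" // assumed import path for the Go bindings
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Minimal stand-in for the <domain type='kvm'> document in the log above.
	domainXML := `<domain type='kvm'>
  <name>example-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

	// Define the persistent domain from XML, then start (create) it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}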
	I0816 12:39:21.698387   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:1b:b9:b5 in network default
	I0816 12:39:21.698904   22106 main.go:141] libmachine: (ha-863936-m03) Ensuring networks are active...
	I0816 12:39:21.698923   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:21.699565   22106 main.go:141] libmachine: (ha-863936-m03) Ensuring network default is active
	I0816 12:39:21.699871   22106 main.go:141] libmachine: (ha-863936-m03) Ensuring network mk-ha-863936 is active
	I0816 12:39:21.700365   22106 main.go:141] libmachine: (ha-863936-m03) Getting domain xml...
	I0816 12:39:21.701033   22106 main.go:141] libmachine: (ha-863936-m03) Creating domain...
	I0816 12:39:22.916117   22106 main.go:141] libmachine: (ha-863936-m03) Waiting to get IP...
	I0816 12:39:22.916874   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:22.917291   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:22.917317   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:22.917275   23071 retry.go:31] will retry after 233.955582ms: waiting for machine to come up
	I0816 12:39:23.152974   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:23.153467   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:23.153493   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:23.153415   23071 retry.go:31] will retry after 270.571352ms: waiting for machine to come up
	I0816 12:39:23.425833   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:23.426386   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:23.426411   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:23.426333   23071 retry.go:31] will retry after 308.115392ms: waiting for machine to come up
	I0816 12:39:23.735782   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:23.736291   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:23.736326   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:23.736237   23071 retry.go:31] will retry after 580.049804ms: waiting for machine to come up
	I0816 12:39:24.318069   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:24.318561   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:24.318586   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:24.318523   23071 retry.go:31] will retry after 602.942822ms: waiting for machine to come up
	I0816 12:39:24.923074   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:24.923490   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:24.923516   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:24.923446   23071 retry.go:31] will retry after 579.631175ms: waiting for machine to come up
	I0816 12:39:25.504124   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:25.504540   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:25.504566   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:25.504503   23071 retry.go:31] will retry after 943.910472ms: waiting for machine to come up
	I0816 12:39:26.450255   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:26.450645   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:26.450696   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:26.450641   23071 retry.go:31] will retry after 1.228766387s: waiting for machine to come up
	I0816 12:39:27.680944   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:27.681389   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:27.681417   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:27.681342   23071 retry.go:31] will retry after 1.495017949s: waiting for machine to come up
	I0816 12:39:29.178303   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:29.178728   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:29.178756   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:29.178677   23071 retry.go:31] will retry after 2.251323948s: waiting for machine to come up
	I0816 12:39:31.431594   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:31.432007   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:31.432038   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:31.431964   23071 retry.go:31] will retry after 2.837656375s: waiting for machine to come up
	I0816 12:39:34.271694   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:34.272287   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:34.272311   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:34.272238   23071 retry.go:31] will retry after 2.568098948s: waiting for machine to come up
	I0816 12:39:36.842648   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:36.843094   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:36.843117   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:36.843049   23071 retry.go:31] will retry after 3.039763146s: waiting for machine to come up
	I0816 12:39:39.885857   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:39.886300   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find current IP address of domain ha-863936-m03 in network mk-ha-863936
	I0816 12:39:39.886334   22106 main.go:141] libmachine: (ha-863936-m03) DBG | I0816 12:39:39.886244   23071 retry.go:31] will retry after 4.12414469s: waiting for machine to come up
	I0816 12:39:44.013251   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.013799   22106 main.go:141] libmachine: (ha-863936-m03) Found IP for machine: 192.168.39.116
	I0816 12:39:44.013824   22106 main.go:141] libmachine: (ha-863936-m03) Reserving static IP address...
	I0816 12:39:44.013837   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has current primary IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.014226   22106 main.go:141] libmachine: (ha-863936-m03) DBG | unable to find host DHCP lease matching {name: "ha-863936-m03", mac: "52:54:00:ec:05:59", ip: "192.168.39.116"} in network mk-ha-863936
	I0816 12:39:44.085190   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Getting to WaitForSSH function...
	I0816 12:39:44.085217   22106 main.go:141] libmachine: (ha-863936-m03) Reserved static IP address: 192.168.39.116
	I0816 12:39:44.085229   22106 main.go:141] libmachine: (ha-863936-m03) Waiting for SSH to be available...
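	The lease-wait loop above (the repeated "will retry after …" lines from retry.go) polls libvirt for a DHCP lease with an increasing, jittered delay until the domain reports an address. A minimal Go sketch of that polling pattern, with a hypothetical lookup callback and illustrative delays (this is not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping an
	// increasing, jittered interval between attempts, and gives up after
	// maxWait. lookup stands in for the libvirt DHCP-lease query.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		delay := 500 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			// Jitter keeps parallel node creations from polling in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address of domain")
			}
			return "192.168.39.116", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}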
	I0816 12:39:44.087613   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.087988   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.088019   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.088107   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Using SSH client type: external
	I0816 12:39:44.088132   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa (-rw-------)
	I0816 12:39:44.088162   22106 main.go:141] libmachine: (ha-863936-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 12:39:44.088176   22106 main.go:141] libmachine: (ha-863936-m03) DBG | About to run SSH command:
	I0816 12:39:44.088198   22106 main.go:141] libmachine: (ha-863936-m03) DBG | exit 0
	I0816 12:39:44.213073   22106 main.go:141] libmachine: (ha-863936-m03) DBG | SSH cmd err, output: <nil>: 
	I0816 12:39:44.213332   22106 main.go:141] libmachine: (ha-863936-m03) KVM machine creation complete!
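	The "Using SSH client type: external" path shown just above shells out to /usr/bin/ssh with host-key checking disabled and the machine's generated identity file, then probes the guest with "exit 0". A self-contained Go sketch of that invocation; the address and key path below are copied from the log purely for illustration, and the authoritative flag set is the one printed at 12:39:44.088162:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runExternalSSH mirrors the external-SSH probe above: invoke the system
	// ssh binary with a throwaway known_hosts file and a dedicated identity,
	// then run a trivial command ("exit 0") to confirm the guest is reachable.
	func runExternalSSH(addr, keyPath, command string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			command,
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
		return err
	}

	func main() {
		_ = runExternalSSH("192.168.39.116",
			"/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa",
			"exit 0")
	}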
	I0816 12:39:44.213667   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetConfigRaw
	I0816 12:39:44.214144   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:44.214319   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:44.214508   22106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 12:39:44.214522   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:39:44.215811   22106 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 12:39:44.215827   22106 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 12:39:44.215835   22106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 12:39:44.215843   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.218240   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.218645   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.218675   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.218760   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.218924   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.219081   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.219326   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.219501   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.219695   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.219709   22106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 12:39:44.324435   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:39:44.324465   22106 main.go:141] libmachine: Detecting the provisioner...
	I0816 12:39:44.324478   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.327086   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.327395   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.327423   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.327600   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.327773   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.327943   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.328090   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.328256   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.328415   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.328432   22106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 12:39:44.433470   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 12:39:44.433538   22106 main.go:141] libmachine: found compatible host: buildroot
	I0816 12:39:44.433546   22106 main.go:141] libmachine: Provisioning with buildroot...
	I0816 12:39:44.433553   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:44.433800   22106 buildroot.go:166] provisioning hostname "ha-863936-m03"
	I0816 12:39:44.433823   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:44.434019   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.436864   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.437379   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.437408   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.437564   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.437762   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.437955   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.438139   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.438335   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.438537   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.438556   22106 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936-m03 && echo "ha-863936-m03" | sudo tee /etc/hostname
	I0816 12:39:44.563716   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936-m03
	
	I0816 12:39:44.563742   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.566487   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.566844   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.566871   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.567111   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.567319   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.567496   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.567613   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.567789   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:44.567976   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:44.567994   22106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:39:44.681978   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:39:44.682010   22106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:39:44.682023   22106 buildroot.go:174] setting up certificates
	I0816 12:39:44.682033   22106 provision.go:84] configureAuth start
	I0816 12:39:44.682041   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetMachineName
	I0816 12:39:44.682294   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:44.684600   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.684925   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.684955   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.685109   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.687476   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.687806   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.687834   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.687928   22106 provision.go:143] copyHostCerts
	I0816 12:39:44.687959   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:39:44.688002   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:39:44.688020   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:39:44.688093   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:39:44.688186   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:39:44.688215   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:39:44.688224   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:39:44.688261   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:39:44.688324   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:39:44.688352   22106 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:39:44.688361   22106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:39:44.688395   22106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:39:44.688470   22106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936-m03 san=[127.0.0.1 192.168.39.116 ha-863936-m03 localhost minikube]
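	The "generating server cert" step above issues a server certificate signed by the shared minikube CA, with subject-alternative names covering every address a client might use for this node (loopback, the node IP, its hostnames). A Go sketch of issuing such a cert with crypto/x509; the CA is generated in-memory here only for brevity, whereas minikube loads ca.pem/ca-key.pem from .minikube/certs, and error handling is elided:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// In-memory CA (stand-in for ca.pem / ca-key.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs listed in the log's san=[...] entry.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-863936-m03"}, CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(2, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.116")},
			DNSNames:     []string{"ha-863936-m03", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}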
	I0816 12:39:44.848981   22106 provision.go:177] copyRemoteCerts
	I0816 12:39:44.849044   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:39:44.849073   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:44.851543   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.851859   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:44.851884   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:44.852088   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:44.852259   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:44.852403   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:44.852547   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:44.935198   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:39:44.935272   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 12:39:44.959658   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:39:44.959735   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:39:44.985094   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:39:44.985166   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:39:45.009372   22106 provision.go:87] duration metric: took 327.327581ms to configureAuth
	I0816 12:39:45.009405   22106 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:39:45.009620   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:39:45.009688   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.012702   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.013070   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.013102   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.013282   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.013464   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.013667   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.013889   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.014066   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:45.014285   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:45.014303   22106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:39:45.292636   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:39:45.292665   22106 main.go:141] libmachine: Checking connection to Docker...
	I0816 12:39:45.292675   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetURL
	I0816 12:39:45.294064   22106 main.go:141] libmachine: (ha-863936-m03) DBG | Using libvirt version 6000000
	I0816 12:39:45.296263   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.296581   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.296608   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.296770   22106 main.go:141] libmachine: Docker is up and running!
	I0816 12:39:45.296785   22106 main.go:141] libmachine: Reticulating splines...
	I0816 12:39:45.296793   22106 client.go:171] duration metric: took 23.937594799s to LocalClient.Create
	I0816 12:39:45.296820   22106 start.go:167] duration metric: took 23.937668178s to libmachine.API.Create "ha-863936"
	I0816 12:39:45.296831   22106 start.go:293] postStartSetup for "ha-863936-m03" (driver="kvm2")
	I0816 12:39:45.296842   22106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:39:45.296858   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.297073   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:39:45.297098   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.299166   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.299488   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.299514   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.299630   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.299783   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.299942   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.300065   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:45.383766   22106 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:39:45.388142   22106 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:39:45.388161   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:39:45.388242   22106 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:39:45.388326   22106 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:39:45.388338   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:39:45.388432   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:39:45.398171   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:39:45.422664   22106 start.go:296] duration metric: took 125.819541ms for postStartSetup
	I0816 12:39:45.422716   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetConfigRaw
	I0816 12:39:45.423243   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:45.425865   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.426320   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.426353   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.426678   22106 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:39:45.426870   22106 start.go:128] duration metric: took 24.086054434s to createHost
	I0816 12:39:45.426893   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.429438   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.429827   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.429857   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.430024   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.430210   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.430386   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.430539   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.430715   22106 main.go:141] libmachine: Using SSH client type: native
	I0816 12:39:45.430903   22106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0816 12:39:45.430916   22106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:39:45.537998   22106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811985.515221918
	
	I0816 12:39:45.538019   22106 fix.go:216] guest clock: 1723811985.515221918
	I0816 12:39:45.538028   22106 fix.go:229] Guest: 2024-08-16 12:39:45.515221918 +0000 UTC Remote: 2024-08-16 12:39:45.426882078 +0000 UTC m=+192.431234971 (delta=88.33984ms)
	I0816 12:39:45.538049   22106 fix.go:200] guest clock delta is within tolerance: 88.33984ms
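	The clock check above runs `date +%s.%N` on the guest and compares the result against the host clock; the 88.33984ms delta is accepted because it is under the skew tolerance. A small Go sketch of that comparison; the 2-second tolerance is an assumed value for illustration, since the log does not state minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns the
	// absolute skew between the guest clock and the local (host) clock.
	func guestClockDelta(guestRaw string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestRaw), ".", 2)
		secs, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		delta := time.Since(time.Unix(secs, nsec))
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		delta, err := guestClockDelta("1723811985.515221918")
		if err != nil {
			panic(err)
		}
		if delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
		} else {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		}
	}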
	I0816 12:39:45.538056   22106 start.go:83] releasing machines lock for "ha-863936-m03", held for 24.197348079s
	I0816 12:39:45.538075   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.538325   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:45.540772   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.541097   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.541120   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.543397   22106 out.go:177] * Found network options:
	I0816 12:39:45.544579   22106 out.go:177]   - NO_PROXY=192.168.39.2,192.168.39.101
	W0816 12:39:45.545721   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 12:39:45.545742   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:39:45.545755   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.546283   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.546457   22106 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:39:45.546539   22106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:39:45.546567   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	W0816 12:39:45.546777   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 12:39:45.546805   22106 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 12:39:45.546856   22106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:39:45.546875   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:39:45.549485   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.549808   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.549957   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.549987   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.550044   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.550179   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.550310   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:45.550329   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:45.550361   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.550473   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:39:45.550566   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:39:45.550646   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:39:45.550789   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:39:45.550916   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
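	The "fail to check proxy env: Error ip not in block" warnings above report that the new node's IP (192.168.39.116) is not covered by the NO_PROXY entries already set (192.168.39.2,192.168.39.101). A rough Go sketch of that kind of NO_PROXY membership test, matching either an exact entry or a CIDR block; this is illustrative only, not minikube's proxy.go:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// ipInNoProxy reports whether ip matches a NO_PROXY entry exactly or
	// falls inside a CIDR entry in the comma-separated noProxy list.
	func ipInNoProxy(ip string, noProxy string) bool {
		target := net.ParseIP(ip)
		for _, entry := range strings.Split(noProxy, ",") {
			entry = strings.TrimSpace(entry)
			if entry == "" {
				continue
			}
			if entry == ip {
				return true
			}
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
				return true
			}
		}
		return false
	}

	func main() {
		// Prints false: the node IP is not in the existing NO_PROXY list.
		fmt.Println(ipInNoProxy("192.168.39.116", "192.168.39.2,192.168.39.101"))
	}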
	I0816 12:39:45.784207   22106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:39:45.791086   22106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:39:45.791165   22106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:39:45.807570   22106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 12:39:45.807594   22106 start.go:495] detecting cgroup driver to use...
	I0816 12:39:45.807690   22106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:39:45.823478   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:39:45.837750   22106 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:39:45.837808   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:39:45.851187   22106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:39:45.864391   22106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:39:45.993300   22106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:39:46.132685   22106 docker.go:233] disabling docker service ...
	I0816 12:39:46.132753   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:39:46.149062   22106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:39:46.163221   22106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:39:46.309389   22106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:39:46.431780   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:39:46.447205   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:39:46.467249   22106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:39:46.467316   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.480187   22106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:39:46.480244   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.491467   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.503863   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.516208   22106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:39:46.528759   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.541212   22106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.560021   22106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:39:46.572038   22106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:39:46.582979   22106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 12:39:46.583030   22106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 12:39:46.598010   22106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:39:46.609096   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:39:46.744055   22106 ssh_runner.go:195] Run: sudo systemctl restart crio
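	Taken together, the sed edits above (12:39:46.467316 through 12:39:46.560021) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted. This is a reconstruction from the commands in the log: the section headers are the standard CRI-O config sections and are assumed here, and any surrounding keys in the shipped file are omitted.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]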
	I0816 12:39:46.892116   22106 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:39:46.892194   22106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:39:46.897419   22106 start.go:563] Will wait 60s for crictl version
	I0816 12:39:46.897490   22106 ssh_runner.go:195] Run: which crictl
	I0816 12:39:46.901432   22106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:39:46.943865   22106 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:39:46.943950   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:39:46.972800   22106 ssh_runner.go:195] Run: crio --version
	I0816 12:39:47.005284   22106 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:39:47.006707   22106 out.go:177]   - env NO_PROXY=192.168.39.2
	I0816 12:39:47.007924   22106 out.go:177]   - env NO_PROXY=192.168.39.2,192.168.39.101
	I0816 12:39:47.009234   22106 main.go:141] libmachine: (ha-863936-m03) Calling .GetIP
	I0816 12:39:47.011740   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:47.012103   22106 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:39:47.012130   22106 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:39:47.012260   22106 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:39:47.016138   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:39:47.028223   22106 mustload.go:65] Loading cluster: ha-863936
	I0816 12:39:47.028463   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:39:47.028807   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:47.028848   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:47.043959   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0816 12:39:47.044352   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:47.044793   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:47.044812   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:47.045173   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:47.045371   22106 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:39:47.046860   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:39:47.047119   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:47.047148   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:47.061012   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43707
	I0816 12:39:47.061370   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:47.061759   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:47.061780   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:47.062050   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:47.062234   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:39:47.062390   22106 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.116
	I0816 12:39:47.062401   22106 certs.go:194] generating shared ca certs ...
	I0816 12:39:47.062415   22106 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:39:47.062555   22106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:39:47.062609   22106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:39:47.062621   22106 certs.go:256] generating profile certs ...
	I0816 12:39:47.062709   22106 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:39:47.062740   22106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242
	I0816 12:39:47.062759   22106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.116 192.168.39.254]
	I0816 12:39:47.332156   22106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242 ...
	I0816 12:39:47.332187   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242: {Name:mk0783a32718663628076e9a86ffe5813a5bef31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:39:47.332347   22106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242 ...
	I0816 12:39:47.332357   22106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242: {Name:mk54e687be730ef92f1235055c48ec58a7b5a2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:39:47.332423   22106 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.fd4b6242 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:39:47.332574   22106 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.fd4b6242 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:39:47.332730   22106 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:39:47.332748   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:39:47.332768   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:39:47.332787   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:39:47.332805   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:39:47.332822   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:39:47.332836   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:39:47.332849   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:39:47.332867   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:39:47.332952   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:39:47.332991   22106 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:39:47.333005   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:39:47.333037   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:39:47.333067   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:39:47.333098   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:39:47.333151   22106 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:39:47.333189   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.333211   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.333229   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.333270   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:39:47.336364   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:47.336766   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:39:47.336793   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:47.336996   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:39:47.337221   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:39:47.337368   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:39:47.337501   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:39:47.409377   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 12:39:47.414598   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 12:39:47.426795   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 12:39:47.431854   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0816 12:39:47.443184   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 12:39:47.451247   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 12:39:47.461963   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 12:39:47.466038   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 12:39:47.476237   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 12:39:47.480646   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 12:39:47.491284   22106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 12:39:47.495307   22106 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0816 12:39:47.505716   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:39:47.533064   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:39:47.558569   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:39:47.582587   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:39:47.605768   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 12:39:47.628844   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 12:39:47.653715   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:39:47.678391   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:39:47.702330   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:39:47.726281   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:39:47.751043   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:39:47.777410   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 12:39:47.795096   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0816 12:39:47.812870   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 12:39:47.830301   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 12:39:47.848085   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 12:39:47.866159   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0816 12:39:47.882810   22106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 12:39:47.899353   22106 ssh_runner.go:195] Run: openssl version
	I0816 12:39:47.905031   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:39:47.915599   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.920005   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.920054   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:39:47.925869   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:39:47.936028   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:39:47.946226   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.950887   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.950937   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:39:47.956305   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:39:47.966967   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:39:47.978398   22106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.982636   22106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.982686   22106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:39:47.988111   22106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:39:47.998232   22106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:39:48.002033   22106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:39:48.002087   22106 kubeadm.go:934] updating node {m03 192.168.39.116 8443 v1.31.0 crio true true} ...
	I0816 12:39:48.002163   22106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:39:48.002187   22106 kube-vip.go:115] generating kube-vip config ...
	I0816 12:39:48.002215   22106 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:39:48.020229   22106 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:39:48.020300   22106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
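The manifest above is the static pod that later lands in /etc/kubernetes/manifests/kube-vip.yaml on the new control-plane node: kube-vip ARP-advertises the HA virtual IP 192.168.39.254 on eth0, uses the plndr-cp-lock Lease for leader election, and load-balances the API server on port 8443 (cp_enable/lb_enable). A small sketch that round-trips such a manifest through the Kubernetes Pod type to spot-check the rendered values follows; the use of sigs.k8s.io/yaml and k8s.io/api here is an assumption for illustration, not minikube's own validation path.

    package main

    import (
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    // main loads a kube-vip static-pod manifest and prints the VIP-related
    // environment variables, a cheap sanity check on the generated config.
    func main() {
    	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	var pod corev1.Pod
    	if err := yaml.Unmarshal(raw, &pod); err != nil {
    		fmt.Fprintln(os.Stderr, "not a valid Pod manifest:", err)
    		return
    	}
    	if len(pod.Spec.Containers) == 0 {
    		fmt.Fprintln(os.Stderr, "manifest has no containers")
    		return
    	}
    	for _, env := range pod.Spec.Containers[0].Env {
    		switch env.Name {
    		case "address", "cp_enable", "lb_enable", "lb_port":
    			fmt.Printf("%s=%s\n", env.Name, env.Value)
    		}
    	}
    }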
	I0816 12:39:48.020365   22106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:39:48.030631   22106 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 12:39:48.030689   22106 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 12:39:48.040332   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 12:39:48.040357   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:39:48.040387   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 12:39:48.040430   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 12:39:48.040433   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:39:48.040337   22106 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 12:39:48.040496   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:39:48.040587   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 12:39:48.055398   22106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:39:48.055478   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 12:39:48.055493   22106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 12:39:48.055505   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 12:39:48.055551   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 12:39:48.055581   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 12:39:48.081435   22106 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 12:39:48.081471   22106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
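The block above shows the per-binary provisioning pattern: the kubectl, kubeadm, and kubelet release URLs (with their .sha256 checksums) are resolved, the cached copies under .minikube/cache are registered as file assets, and each binary is scp'd only after a `stat` on the target path fails. A simplified local sketch of that "check, then copy" step is below; it uses plain filesystem calls, while the real code runs the existence check and transfer over SSH.

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // ensureBinary copies src to dst only when dst is missing, mirroring the
    // "existence check ... Process exited with status 1" followed by scp in
    // the log above.
    func ensureBinary(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		return nil // already present, nothing to transfer
    	}
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	err := ensureBinary(
    		"/home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.31.0/kubelet",
    		"/var/lib/minikube/binaries/v1.31.0/kubelet",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }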
	I0816 12:39:48.900755   22106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 12:39:48.910655   22106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 12:39:48.929582   22106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:39:48.947656   22106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 12:39:48.964345   22106 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:39:48.968662   22106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:39:48.981049   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:39:49.113493   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:39:49.140698   22106 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:39:49.141190   22106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:39:49.141237   22106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:39:49.156476   22106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0816 12:39:49.156852   22106 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:39:49.157361   22106 main.go:141] libmachine: Using API Version  1
	I0816 12:39:49.157400   22106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:39:49.157748   22106 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:39:49.157915   22106 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:39:49.158050   22106 start.go:317] joinCluster: &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:39:49.158216   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 12:39:49.158239   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:39:49.161272   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:49.161849   22106 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:39:49.161876   22106 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:39:49.162129   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:39:49.162319   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:39:49.162498   22106 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:39:49.162651   22106 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:39:49.308570   22106 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:39:49.308615   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u6h2w3.uj2dx2uo7mssayjl --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m03 --control-plane --apiserver-advertise-address=192.168.39.116 --apiserver-bind-port=8443"
	I0816 12:40:11.090647   22106 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u6h2w3.uj2dx2uo7mssayjl --discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863936-m03 --control-plane --apiserver-advertise-address=192.168.39.116 --apiserver-bind-port=8443": (21.781996337s)
	I0816 12:40:11.090686   22106 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 12:40:11.713290   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863936-m03 minikube.k8s.io/updated_at=2024_08_16T12_40_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=ha-863936 minikube.k8s.io/primary=false
	I0816 12:40:11.848300   22106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863936-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 12:40:11.955312   22106 start.go:319] duration metric: took 22.797258761s to joinCluster
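The join of m03 follows the standard HA kubeadm flow visible above: the primary control plane prints a join command (`kubeadm token create --print-join-command --ttl=0`), the new node runs `kubeadm join control-plane.minikube.internal:8443 ... --control-plane --apiserver-advertise-address=192.168.39.116`, and minikube then labels the node and removes the control-plane NoSchedule taint. A hedged sketch of assembling such a join command from its parts is below; the JoinConfig type and placeholder token/hash are illustrative, not minikube's own types.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // JoinConfig holds the values that appear in the logged kubeadm join call.
    type JoinConfig struct {
    	Endpoint         string // e.g. control-plane.minikube.internal:8443
    	Token            string
    	CACertHash       string // sha256:<hex>
    	NodeName         string
    	AdvertiseAddress string
    	BindPort         int
    	CRISocket        string
    }

    // JoinCommand renders a kubeadm join invocation for an additional
    // control-plane node, matching the shape of the command in the log.
    func (c JoinConfig) JoinCommand() string {
    	args := []string{
    		"kubeadm", "join", c.Endpoint,
    		"--token", c.Token,
    		"--discovery-token-ca-cert-hash", c.CACertHash,
    		"--ignore-preflight-errors=all",
    		"--cri-socket", c.CRISocket,
    		"--node-name=" + c.NodeName,
    		"--control-plane",
    		"--apiserver-advertise-address=" + c.AdvertiseAddress,
    		fmt.Sprintf("--apiserver-bind-port=%d", c.BindPort),
    	}
    	return strings.Join(args, " ")
    }

    func main() {
    	fmt.Println(JoinConfig{
    		Endpoint:         "control-plane.minikube.internal:8443",
    		Token:            "<token>",
    		CACertHash:       "sha256:<hash>",
    		NodeName:         "ha-863936-m03",
    		AdvertiseAddress: "192.168.39.116",
    		BindPort:         8443,
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    	}.JoinCommand())
    }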
	I0816 12:40:11.955390   22106 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:40:11.955718   22106 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:40:11.958009   22106 out.go:177] * Verifying Kubernetes components...
	I0816 12:40:11.959732   22106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:40:12.229920   22106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:40:12.277487   22106 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:40:12.277772   22106 kapi.go:59] client config for ha-863936: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key", CAFile:"/home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 12:40:12.277857   22106 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0816 12:40:12.278091   22106 node_ready.go:35] waiting up to 6m0s for node "ha-863936-m03" to be "Ready" ...
	I0816 12:40:12.278182   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:12.278195   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:12.278206   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:12.278212   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:12.282256   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:12.778472   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:12.778495   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:12.778507   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:12.778514   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:12.781886   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:13.278748   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:13.278775   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:13.278787   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:13.278793   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:13.283264   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:13.778960   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:13.778987   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:13.778999   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:13.779004   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:13.782687   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:14.279100   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:14.279125   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:14.279138   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:14.279143   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:14.283127   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:14.283959   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:14.778564   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:14.778582   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:14.778590   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:14.778594   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:14.782146   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:15.279149   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:15.279171   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:15.279190   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:15.279194   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:15.282879   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:15.779236   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:15.779259   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:15.779266   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:15.779270   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:15.782706   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:16.278302   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:16.278330   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:16.278341   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:16.278346   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:16.281685   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:16.779155   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:16.779179   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:16.779189   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:16.779197   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:16.783557   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:16.784096   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:17.278627   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:17.278649   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:17.278660   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:17.278668   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:17.282931   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:17.779106   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:17.779153   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:17.779165   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:17.779170   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:17.782823   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:18.278705   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:18.278732   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:18.278742   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:18.278749   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:18.281818   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:18.779227   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:18.779246   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:18.779255   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:18.779259   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:18.782255   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:19.278470   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:19.278491   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:19.278498   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:19.278502   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:19.282409   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:19.282994   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:19.778380   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:19.778401   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:19.778408   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:19.778412   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:19.831668   22106 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0816 12:40:20.278686   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:20.278707   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:20.278716   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:20.278720   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:20.282475   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:20.778533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:20.778561   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:20.778577   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:20.778583   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:20.782177   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:21.278736   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:21.278761   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:21.278772   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:21.278779   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:21.282659   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:21.283360   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:21.779177   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:21.779196   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:21.779204   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:21.779209   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:21.782339   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:22.278604   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:22.278626   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:22.278635   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:22.278639   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:22.282134   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:22.778983   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:22.779008   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:22.779017   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:22.779022   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:22.782768   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:23.278365   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:23.278387   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:23.278395   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:23.278400   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:23.282675   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:23.778440   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:23.778461   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:23.778469   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:23.778474   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:23.782033   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:23.782575   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:24.279307   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:24.279343   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:24.279352   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:24.279360   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:24.282719   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:24.778754   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:24.778775   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:24.778786   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:24.778792   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:24.782183   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:25.278298   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:25.278324   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:25.278334   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:25.278341   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:25.282223   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:25.779215   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:25.779239   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:25.779245   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:25.779250   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:25.782593   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:25.783197   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:26.279008   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:26.279029   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:26.279036   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:26.279040   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:26.282803   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:26.778232   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:26.778254   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:26.778262   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:26.778266   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:26.781679   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:27.278612   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:27.278638   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:27.278650   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:27.278655   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:27.282968   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:27.778330   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:27.778367   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:27.778377   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:27.778382   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:27.781449   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:28.278931   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:28.278953   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:28.278961   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:28.278965   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:28.282714   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:28.283388   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:28.778360   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:28.778381   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:28.778389   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:28.778392   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:28.781029   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:29.279267   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:29.279291   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:29.279301   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:29.279308   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:29.283008   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:29.778429   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:29.778450   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:29.778461   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:29.778467   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:29.782957   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:30.278492   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:30.278520   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:30.278530   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:30.278536   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:30.282565   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:30.778612   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:30.778633   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:30.778641   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:30.778645   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:30.781743   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:30.782623   22106 node_ready.go:53] node "ha-863936-m03" has status "Ready":"False"
	I0816 12:40:31.278323   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:31.278350   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.278361   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.278365   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.282230   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:31.779052   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:31.779073   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.779083   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.779089   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.781908   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.782448   22106 node_ready.go:49] node "ha-863936-m03" has status "Ready":"True"
	I0816 12:40:31.782467   22106 node_ready.go:38] duration metric: took 19.504360065s for node "ha-863936-m03" to be "Ready" ...
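The node_ready.go wait above is a simple poll: a GET on /api/v1/nodes/ha-863936-m03 roughly every 500ms until the NodeReady condition turns True, which took about 19.5s here. A minimal client-go sketch of the same kind of check is below (not minikube's actual code; the kubeconfig path and timeouts are illustrative).

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports the
    // Ready condition as True, similar to the readiness loop in the log.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-3966/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-863936-m03", 6*time.Minute))
    }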
	I0816 12:40:31.782475   22106 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:40:31.782533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:31.782543   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.782550   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.782555   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.788627   22106 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 12:40:31.795832   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.795920   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7gfgm
	I0816 12:40:31.795931   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.795941   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.795951   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.798749   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.799323   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:31.799338   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.799349   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.799354   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.802274   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.802790   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.802806   22106 pod_ready.go:82] duration metric: took 6.951459ms for pod "coredns-6f6b679f8f-7gfgm" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.802817   22106 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.802892   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ssb5h
	I0816 12:40:31.802903   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.802912   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.802920   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.805842   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.806670   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:31.806687   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.806697   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.806704   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.809446   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.809991   22106 pod_ready.go:93] pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.810012   22106 pod_ready.go:82] duration metric: took 7.186952ms for pod "coredns-6f6b679f8f-ssb5h" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.810030   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.810159   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936
	I0816 12:40:31.810179   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.810190   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.810195   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.813055   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.813625   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:31.813638   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.813646   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.813653   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.815932   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.816479   22106 pod_ready.go:93] pod "etcd-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.816493   22106 pod_ready.go:82] duration metric: took 6.455136ms for pod "etcd-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.816501   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.816543   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m02
	I0816 12:40:31.816550   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.816557   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.816562   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.818930   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.819533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:31.819547   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.819554   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.819557   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.821944   22106 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 12:40:31.822437   22106 pod_ready.go:93] pod "etcd-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:31.822451   22106 pod_ready.go:82] duration metric: took 5.944552ms for pod "etcd-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.822458   22106 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:31.980023   22106 request.go:632] Waited for 157.516461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m03
	I0816 12:40:31.980075   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863936-m03
	I0816 12:40:31.980080   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:31.980089   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:31.980096   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:31.983243   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.179679   22106 request.go:632] Waited for 195.791488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:32.179741   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:32.179749   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.179759   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.179768   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.183094   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.183726   22106 pod_ready.go:93] pod "etcd-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:32.183747   22106 pod_ready.go:82] duration metric: took 361.282787ms for pod "etcd-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
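The "Waited ... due to client-side throttling, not priority and fairness" messages come from the default client-go rate limiter on rest.Config (5 QPS, burst 10); the back-to-back pod and node GETs in this readiness loop exceed it, so individual requests queue for roughly 150-200ms here. When a client legitimately needs bursts of sequential reads, the limit can be raised on the rest.Config before building the clientset; a short sketch follows, with illustrative values.

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a higher client-side rate limit so
    // bursts of readiness GETs are not queued by the default 5 QPS / 10 burst
    // limiter that produces the throttling messages above.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // requests per second before throttling kicks in
    	cfg.Burst = 100 // short bursts allowed above QPS
    	return kubernetes.NewForConfig(cfg)
    }

    func main() {
    	if _, err := newFastClient("/home/jenkins/minikube-integration/19423-3966/kubeconfig"); err != nil {
    		panic(err)
    	}
    }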
	I0816 12:40:32.183770   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.379843   22106 request.go:632] Waited for 195.98205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:40:32.379908   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936
	I0816 12:40:32.379915   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.379929   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.379939   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.384347   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:32.579735   22106 request.go:632] Waited for 194.320249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:32.579824   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:32.579836   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.579844   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.579849   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.583255   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.583891   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:32.583909   22106 pod_ready.go:82] duration metric: took 400.128194ms for pod "kube-apiserver-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.583919   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.780134   22106 request.go:632] Waited for 196.057891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:40:32.780196   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m02
	I0816 12:40:32.780202   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.780209   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.780213   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.783424   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.979527   22106 request.go:632] Waited for 195.450448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:32.979589   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:32.979596   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:32.979603   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:32.979606   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:32.983008   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:32.983583   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:32.983605   22106 pod_ready.go:82] duration metric: took 399.678344ms for pod "kube-apiserver-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:32.983617   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.179483   22106 request.go:632] Waited for 195.78335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m03
	I0816 12:40:33.179548   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863936-m03
	I0816 12:40:33.179556   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.179563   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.179567   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.182954   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:33.379921   22106 request.go:632] Waited for 196.353072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:33.379978   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:33.379983   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.379989   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.379996   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.383619   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:33.384225   22106 pod_ready.go:93] pod "kube-apiserver-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:33.384243   22106 pod_ready.go:82] duration metric: took 400.618667ms for pod "kube-apiserver-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.384254   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.579467   22106 request.go:632] Waited for 195.152422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:40:33.579544   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936
	I0816 12:40:33.579550   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.579557   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.579561   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.582685   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:33.779846   22106 request.go:632] Waited for 196.387517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:33.779912   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:33.779925   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.779935   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.779944   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.785595   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:40:33.786659   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:33.786684   22106 pod_ready.go:82] duration metric: took 402.421297ms for pod "kube-controller-manager-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.786698   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:33.979597   22106 request.go:632] Waited for 192.829532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:40:33.979650   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m02
	I0816 12:40:33.979655   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:33.979663   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:33.979667   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:33.982926   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.179274   22106 request.go:632] Waited for 195.397989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:34.179329   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:34.179336   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.179346   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.179355   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.182593   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.183339   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:34.183357   22106 pod_ready.go:82] duration metric: took 396.647187ms for pod "kube-controller-manager-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.183370   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.379371   22106 request.go:632] Waited for 195.903008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m03
	I0816 12:40:34.379446   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863936-m03
	I0816 12:40:34.379451   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.379462   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.379473   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.382770   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.579862   22106 request.go:632] Waited for 196.312482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.579913   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.579918   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.579925   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.579928   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.583461   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.583954   22106 pod_ready.go:93] pod "kube-controller-manager-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:34.583972   22106 pod_ready.go:82] duration metric: took 400.581972ms for pod "kube-controller-manager-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.583984   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25gzj" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.779470   22106 request.go:632] Waited for 195.416164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25gzj
	I0816 12:40:34.779533   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25gzj
	I0816 12:40:34.779539   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.779551   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.779560   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.782820   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.979908   22106 request.go:632] Waited for 196.334527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.979965   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:34.979970   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:34.979978   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:34.979983   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:34.983250   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:34.983739   22106 pod_ready.go:93] pod "kube-proxy-25gzj" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:34.983756   22106 pod_ready.go:82] duration metric: took 399.761031ms for pod "kube-proxy-25gzj" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:34.983768   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.179868   22106 request.go:632] Waited for 196.036481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:40:35.179923   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lvfc
	I0816 12:40:35.179929   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.179937   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.179940   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.183162   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:35.379150   22106 request.go:632] Waited for 195.284661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:35.379226   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:35.379232   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.379239   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.379243   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.382532   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:35.383088   22106 pod_ready.go:93] pod "kube-proxy-7lvfc" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:35.383107   22106 pod_ready.go:82] duration metric: took 399.332611ms for pod "kube-proxy-7lvfc" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.383116   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.580093   22106 request.go:632] Waited for 196.911721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:40:35.580184   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g75mg
	I0816 12:40:35.580194   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.580204   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.580210   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.584457   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:35.780082   22106 request.go:632] Waited for 194.340611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:35.780145   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:35.780151   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.780158   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.780162   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.783397   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:35.784071   22106 pod_ready.go:93] pod "kube-proxy-g75mg" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:35.784090   22106 pod_ready.go:82] duration metric: took 400.967246ms for pod "kube-proxy-g75mg" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.784101   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:35.979311   22106 request.go:632] Waited for 195.12957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:40:35.979386   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936
	I0816 12:40:35.979396   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:35.979403   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:35.979407   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:35.986447   22106 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 12:40:36.179743   22106 request.go:632] Waited for 192.239359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:36.179811   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936
	I0816 12:40:36.179818   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.179826   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.179831   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.183193   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.183606   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:36.183625   22106 pod_ready.go:82] duration metric: took 399.516281ms for pod "kube-scheduler-ha-863936" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.183636   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.379990   22106 request.go:632] Waited for 196.29926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:40:36.380046   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m02
	I0816 12:40:36.380051   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.380058   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.380063   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.383859   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.580013   22106 request.go:632] Waited for 195.391551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:36.580071   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m02
	I0816 12:40:36.580076   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.580085   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.580089   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.583266   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.583762   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:36.583780   22106 pod_ready.go:82] duration metric: took 400.136201ms for pod "kube-scheduler-ha-863936-m02" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.583793   22106 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.779994   22106 request.go:632] Waited for 196.132372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m03
	I0816 12:40:36.780066   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863936-m03
	I0816 12:40:36.780072   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.780078   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.780111   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.783562   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.979513   22106 request.go:632] Waited for 195.236791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:36.979562   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-863936-m03
	I0816 12:40:36.979567   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:36.979574   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:36.979580   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:36.982705   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:36.983397   22106 pod_ready.go:93] pod "kube-scheduler-ha-863936-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 12:40:36.983412   22106 pod_ready.go:82] duration metric: took 399.611985ms for pod "kube-scheduler-ha-863936-m03" in "kube-system" namespace to be "Ready" ...
	I0816 12:40:36.983424   22106 pod_ready.go:39] duration metric: took 5.200938239s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
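Note: the repeated ~195 ms "Waited ... due to client-side throttling, not priority and fairness" entries above come from the minikube client itself, not from the API server. They are consistent with client-go's default client-side rate limiter (roughly 5 requests/second with a burst of 10), which spaces the successive GETs about 200 ms apart during this polling loop.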
	I0816 12:40:36.983453   22106 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:40:36.983504   22106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:40:36.999977   22106 api_server.go:72] duration metric: took 25.04455467s to wait for apiserver process to appear ...
	I0816 12:40:36.999996   22106 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:40:37.000013   22106 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0816 12:40:37.004167   22106 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0816 12:40:37.004245   22106 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0816 12:40:37.004254   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.004262   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.004266   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.005260   22106 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 12:40:37.005325   22106 api_server.go:141] control plane version: v1.31.0
	I0816 12:40:37.005358   22106 api_server.go:131] duration metric: took 5.348086ms to wait for apiserver health ...
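For reference, the same health and version probes can be reproduced from the host with curl against the control-plane VIP shown in the log. This is only a sketch; it assumes the cluster's default RBAC still grants anonymous access to /healthz and /version, as a stock minikube/kubeadm setup does:

    # Probe apiserver health (expects the literal body "ok")
    curl -ks https://192.168.39.2:8443/healthz
    # Query the control-plane version (should report v1.31.0 for this run)
    curl -ks https://192.168.39.2:8443/version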
	I0816 12:40:37.005368   22106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:40:37.179338   22106 request.go:632] Waited for 173.906545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.179395   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.179404   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.179414   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.179424   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.184969   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:40:37.191790   22106 system_pods.go:59] 24 kube-system pods found
	I0816 12:40:37.191817   22106 system_pods.go:61] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:40:37.191824   22106 system_pods.go:61] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:40:37.191829   22106 system_pods.go:61] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:40:37.191834   22106 system_pods.go:61] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:40:37.191838   22106 system_pods.go:61] "etcd-ha-863936-m03" [7df0a1f8-b762-4019-96d4-ba0c9431169e] Running
	I0816 12:40:37.191843   22106 system_pods.go:61] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:40:37.191847   22106 system_pods.go:61] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:40:37.191851   22106 system_pods.go:61] "kindnet-zqs4l" [b9054301-c9d9-4f2e-94c9-4557d6f4af2c] Running
	I0816 12:40:37.191857   22106 system_pods.go:61] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:40:37.191862   22106 system_pods.go:61] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:40:37.191867   22106 system_pods.go:61] "kube-apiserver-ha-863936-m03" [0ad1dc81-9baf-46cf-854a-61fcbb617fab] Running
	I0816 12:40:37.191873   22106 system_pods.go:61] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:40:37.191881   22106 system_pods.go:61] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:40:37.191888   22106 system_pods.go:61] "kube-controller-manager-ha-863936-m03" [9f20b501-1733-41f6-a303-26e384227d1d] Running
	I0816 12:40:37.191893   22106 system_pods.go:61] "kube-proxy-25gzj" [8014f69d-cbe6-4369-8dbc-95bb5a429c22] Running
	I0816 12:40:37.191900   22106 system_pods.go:61] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:40:37.191905   22106 system_pods.go:61] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:40:37.191911   22106 system_pods.go:61] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:40:37.191919   22106 system_pods.go:61] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:40:37.191925   22106 system_pods.go:61] "kube-scheduler-ha-863936-m03" [4b3cb586-9afe-4d2d-845b-e6fd397c75d5] Running
	I0816 12:40:37.191930   22106 system_pods.go:61] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:40:37.191936   22106 system_pods.go:61] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:40:37.191942   22106 system_pods.go:61] "kube-vip-ha-863936-m03" [3c5c462a-b019-4973-89aa-af666e620286] Running
	I0816 12:40:37.191947   22106 system_pods.go:61] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:40:37.191956   22106 system_pods.go:74] duration metric: took 186.580365ms to wait for pod list to return data ...
	I0816 12:40:37.191967   22106 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:40:37.379407   22106 request.go:632] Waited for 187.360835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:40:37.379471   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0816 12:40:37.379478   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.379485   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.379488   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.383234   22106 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 12:40:37.383368   22106 default_sa.go:45] found service account: "default"
	I0816 12:40:37.383390   22106 default_sa.go:55] duration metric: took 191.415483ms for default service account to be created ...
	I0816 12:40:37.383404   22106 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:40:37.579826   22106 request.go:632] Waited for 196.353434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.579907   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0816 12:40:37.579917   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.579927   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.579936   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.585456   22106 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 12:40:37.591619   22106 system_pods.go:86] 24 kube-system pods found
	I0816 12:40:37.591648   22106 system_pods.go:89] "coredns-6f6b679f8f-7gfgm" [797ae351-63bf-4994-a9bd-901367887b58] Running
	I0816 12:40:37.591654   22106 system_pods.go:89] "coredns-6f6b679f8f-ssb5h" [5162fb17-6897-40d2-9c2c-80157ea46e07] Running
	I0816 12:40:37.591659   22106 system_pods.go:89] "etcd-ha-863936" [cc32212e-19e1-4ff6-9940-70a580978946] Running
	I0816 12:40:37.591662   22106 system_pods.go:89] "etcd-ha-863936-m02" [2ee4ba71-e936-499e-988a-6a0a3b0c6d65] Running
	I0816 12:40:37.591666   22106 system_pods.go:89] "etcd-ha-863936-m03" [7df0a1f8-b762-4019-96d4-ba0c9431169e] Running
	I0816 12:40:37.591670   22106 system_pods.go:89] "kindnet-dddkq" [87bd9636-168b-4f61-9382-0914014af5c0] Running
	I0816 12:40:37.591675   22106 system_pods.go:89] "kindnet-qmrb2" [66996322-476e-4322-a1df-bd8cc820cb59] Running
	I0816 12:40:37.591679   22106 system_pods.go:89] "kindnet-zqs4l" [b9054301-c9d9-4f2e-94c9-4557d6f4af2c] Running
	I0816 12:40:37.591683   22106 system_pods.go:89] "kube-apiserver-ha-863936" [ec7e5aa8-ffe7-4b42-950b-7fd3911e83e0] Running
	I0816 12:40:37.591686   22106 system_pods.go:89] "kube-apiserver-ha-863936-m02" [a32eab45-93ac-4993-b5dd-f73eb91029ce] Running
	I0816 12:40:37.591690   22106 system_pods.go:89] "kube-apiserver-ha-863936-m03" [0ad1dc81-9baf-46cf-854a-61fcbb617fab] Running
	I0816 12:40:37.591694   22106 system_pods.go:89] "kube-controller-manager-ha-863936" [b46326a0-950f-4b23-82a4-7793da0d9e9c] Running
	I0816 12:40:37.591700   22106 system_pods.go:89] "kube-controller-manager-ha-863936-m02" [c0bf3d0c-b461-460b-8523-7b0c76741e17] Running
	I0816 12:40:37.591706   22106 system_pods.go:89] "kube-controller-manager-ha-863936-m03" [9f20b501-1733-41f6-a303-26e384227d1d] Running
	I0816 12:40:37.591710   22106 system_pods.go:89] "kube-proxy-25gzj" [8014f69d-cbe6-4369-8dbc-95bb5a429c22] Running
	I0816 12:40:37.591714   22106 system_pods.go:89] "kube-proxy-7lvfc" [d3e6918e-a097-4037-b962-ed996efda26f] Running
	I0816 12:40:37.591718   22106 system_pods.go:89] "kube-proxy-g75mg" [8d22ea17-7ddd-4c07-89d5-0ebaa170066c] Running
	I0816 12:40:37.591722   22106 system_pods.go:89] "kube-scheduler-ha-863936" [51e497db-1e2d-4020-b030-23702fc7a568] Running
	I0816 12:40:37.591725   22106 system_pods.go:89] "kube-scheduler-ha-863936-m02" [ec98ee42-008b-4f36-95cc-3defde74c964] Running
	I0816 12:40:37.591731   22106 system_pods.go:89] "kube-scheduler-ha-863936-m03" [4b3cb586-9afe-4d2d-845b-e6fd397c75d5] Running
	I0816 12:40:37.591734   22106 system_pods.go:89] "kube-vip-ha-863936" [55dba92f-60c5-416c-9165-cbde743fbfe2] Running
	I0816 12:40:37.591737   22106 system_pods.go:89] "kube-vip-ha-863936-m02" [b385c963-3f91-4810-9cf7-101fa14e28c6] Running
	I0816 12:40:37.591740   22106 system_pods.go:89] "kube-vip-ha-863936-m03" [3c5c462a-b019-4973-89aa-af666e620286] Running
	I0816 12:40:37.591743   22106 system_pods.go:89] "storage-provisioner" [e6e7b7e6-00b6-42e2-9680-e6660e76bc6f] Running
	I0816 12:40:37.591749   22106 system_pods.go:126] duration metric: took 208.336649ms to wait for k8s-apps to be running ...
	I0816 12:40:37.591758   22106 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:40:37.591801   22106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:40:37.608424   22106 system_svc.go:56] duration metric: took 16.656838ms WaitForService to wait for kubelet
	I0816 12:40:37.608446   22106 kubeadm.go:582] duration metric: took 25.65302687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
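The kubelet check above is simply systemctl run over SSH inside the node. A manual equivalent using the profile name from this run, mirroring the `minikube ssh` form used elsewhere in this report (illustrative only, not part of the recorded test):

    # Exits 0 (and prints "active") while the kubelet unit is running
    out/minikube-linux-amd64 -p ha-863936 ssh "sudo systemctl is-active kubelet"
    # The apiserver process check from the log, run the same way
    out/minikube-linux-amd64 -p ha-863936 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"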
	I0816 12:40:37.608467   22106 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:40:37.779945   22106 request.go:632] Waited for 171.399328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0816 12:40:37.780025   22106 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0816 12:40:37.780036   22106 round_trippers.go:469] Request Headers:
	I0816 12:40:37.780047   22106 round_trippers.go:473]     Accept: application/json, */*
	I0816 12:40:37.780054   22106 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 12:40:37.784395   22106 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 12:40:37.786298   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:40:37.786331   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:40:37.786346   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:40:37.786351   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:40:37.786357   22106 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 12:40:37.786362   22106 node_conditions.go:123] node cpu capacity is 2
	I0816 12:40:37.786368   22106 node_conditions.go:105] duration metric: took 177.896291ms to run NodePressure ...
	I0816 12:40:37.786382   22106 start.go:241] waiting for startup goroutines ...
	I0816 12:40:37.786414   22106 start.go:255] writing updated cluster config ...
	I0816 12:40:37.786855   22106 ssh_runner.go:195] Run: rm -f paused
	I0816 12:40:37.840521   22106 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 12:40:37.843493   22106 out.go:177] * Done! kubectl is now configured to use "ha-863936" cluster and "default" namespace by default
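Once the profile reports Done, the three-node HA cluster can be sanity-checked with the kubectl context the run just configured (shown only as a follow-up, not part of the recorded test):

    # All three control-plane nodes should report Ready
    kubectl --context ha-863936 get nodes -o wide
    # The 24 kube-system pods listed above should all be Running
    kubectl --context ha-863936 -n kube-system get pods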
	
	
	==> CRI-O <==
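The entries below are CRI-O's debug log of the CRI calls made against it (Version, ImageFsInfo, ListContainers). The same endpoints can be queried by hand from inside the node, e.g. over `minikube ssh`, with crictl, which is typically available in the minikube guest (illustrative commands only):

    # Runtime name/version (matches the VersionResponse lines below)
    sudo crictl version
    # Image filesystem usage (matches the ImageFsInfoResponse lines)
    sudo crictl imagefsinfo
    # Full container list (the ListContainersResponse dumps below)
    sudo crictl ps -a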
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.909561349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812308909529616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=615174b7-f5fc-4e6f-a8e8-c7d472764724 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.910309588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d213e9f5-631e-4f18-970d-9ab93697ec87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.910409225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d213e9f5-631e-4f18-970d-9ab93697ec87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.910734679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812042160061472,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856865690872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856826422012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,PodSandboxId:17d99db1f4e4f93d1c171d0d47f3cd255f97dd2c89e9bdad7274573d55fc5109,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723811856781500226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723811844925839346,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172381184
0918258276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,PodSandboxId:440481aadacb06709d51c423b632e279ae02e3d4dbb17c738b0eff0b2c6c4ee1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172381183232
7212273,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f21ce045d97e5d71d18a00985c30116f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,PodSandboxId:6ebc21b6e76559aefefb4672c28d96c9b1f956e38bb4a72c99eda68a533786ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723811829558483285,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723811829571304558,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723811829473844589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,PodSandboxId:07219fcbf99eb43de5a7eaff62f9fbdfb6ea996deb4608e094841f000b349224,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723811829421176392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d213e9f5-631e-4f18-970d-9ab93697ec87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.953540882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc587914-b0eb-4e57-b120-e3f8b0dd6032 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.953612750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc587914-b0eb-4e57-b120-e3f8b0dd6032 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.954879332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4d5e33d-15dd-4bcd-8065-210ffa0f048b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.955481135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812308955452091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4d5e33d-15dd-4bcd-8065-210ffa0f048b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.956137314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c055d3b5-e2bf-4a3a-9900-aad5a958758a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.956212514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c055d3b5-e2bf-4a3a-9900-aad5a958758a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:08 ha-863936 crio[680]: time="2024-08-16 12:45:08.956482195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812042160061472,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856865690872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856826422012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,PodSandboxId:17d99db1f4e4f93d1c171d0d47f3cd255f97dd2c89e9bdad7274573d55fc5109,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723811856781500226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723811844925839346,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172381184
0918258276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,PodSandboxId:440481aadacb06709d51c423b632e279ae02e3d4dbb17c738b0eff0b2c6c4ee1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172381183232
7212273,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f21ce045d97e5d71d18a00985c30116f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,PodSandboxId:6ebc21b6e76559aefefb4672c28d96c9b1f956e38bb4a72c99eda68a533786ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723811829558483285,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723811829571304558,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723811829473844589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,PodSandboxId:07219fcbf99eb43de5a7eaff62f9fbdfb6ea996deb4608e094841f000b349224,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723811829421176392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c055d3b5-e2bf-4a3a-9900-aad5a958758a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.000769567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c02ad7f4-9aa2-430a-8514-7e7b8a566533 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.000842593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c02ad7f4-9aa2-430a-8514-7e7b8a566533 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.001919461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee7b9d4f-4564-4188-8a55-dc53ccd1cc94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.002606634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812309002581318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee7b9d4f-4564-4188-8a55-dc53ccd1cc94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.003249539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=720da56c-5bba-47e5-b617-c456f4003a02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.003317368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=720da56c-5bba-47e5-b617-c456f4003a02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.003569652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812042160061472,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856865690872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856826422012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,PodSandboxId:17d99db1f4e4f93d1c171d0d47f3cd255f97dd2c89e9bdad7274573d55fc5109,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723811856781500226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723811844925839346,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172381184
0918258276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,PodSandboxId:440481aadacb06709d51c423b632e279ae02e3d4dbb17c738b0eff0b2c6c4ee1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172381183232
7212273,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f21ce045d97e5d71d18a00985c30116f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,PodSandboxId:6ebc21b6e76559aefefb4672c28d96c9b1f956e38bb4a72c99eda68a533786ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723811829558483285,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723811829571304558,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723811829473844589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,PodSandboxId:07219fcbf99eb43de5a7eaff62f9fbdfb6ea996deb4608e094841f000b349224,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723811829421176392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=720da56c-5bba-47e5-b617-c456f4003a02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.049442635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f608891e-f72d-4acd-9718-195fdc2f2494 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.049565478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f608891e-f72d-4acd-9718-195fdc2f2494 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.050552568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaaa5e4a-bce6-4f58-8a94-a59bb165d449 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.051166780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812309051139359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaaa5e4a-bce6-4f58-8a94-a59bb165d449 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.051659914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a452af3b-ba53-444d-976b-3febe70f3309 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.051753111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a452af3b-ba53-444d-976b-3febe70f3309 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:45:09 ha-863936 crio[680]: time="2024-08-16 12:45:09.052147434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812042160061472,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856865690872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723811856826422012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7eccab4aea0bb6f0bc7c4549fb8ee6bdfb5c2805f3bd08c2c101869d2d91f44,PodSandboxId:17d99db1f4e4f93d1c171d0d47f3cd255f97dd2c89e9bdad7274573d55fc5109,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723811856781500226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723811844925839346,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172381184
0918258276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a,PodSandboxId:440481aadacb06709d51c423b632e279ae02e3d4dbb17c738b0eff0b2c6c4ee1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172381183232
7212273,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f21ce045d97e5d71d18a00985c30116f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9,PodSandboxId:6ebc21b6e76559aefefb4672c28d96c9b1f956e38bb4a72c99eda68a533786ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723811829558483285,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723811829571304558,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723811829473844589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62,PodSandboxId:07219fcbf99eb43de5a7eaff62f9fbdfb6ea996deb4608e094841f000b349224,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723811829421176392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a452af3b-ba53-444d-976b-3febe70f3309 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e73d7f930e176       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   5f9b33b7fe6f2       busybox-7dff88458-zqpfx
	a32107a6690bf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   13e4c008cfb7e       coredns-6f6b679f8f-ssb5h
	8fb58a4d7b8e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   7061cc0bd22ac       coredns-6f6b679f8f-7gfgm
	c7eccab4aea0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   17d99db1f4e4f       storage-provisioner
	b83ba25619ab6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   d524a508e86ff       kindnet-dddkq
	4aa588906cdcd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   e0fda91da3630       kube-proxy-g75mg
	50ae5af99f597       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   440481aadacb0       kube-vip-ha-863936
	f34879b3d9bde       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   30242516e8e9a       etcd-ha-863936
	ee882e5e99dad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   6ebc21b6e7655       kube-apiserver-ha-863936
	4a0281c780fc2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   40cdcfe4bd9df       kube-scheduler-ha-863936
	2beea39795119       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   07219fcbf99eb       kube-controller-manager-ha-863936
	
	
	==> coredns [8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6] <==
	[INFO] 10.244.0.4:57950 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002003074s
	[INFO] 10.244.2.2:35545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013578902s
	[INFO] 10.244.2.2:50915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198375s
	[INFO] 10.244.2.2:54351 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003032048s
	[INFO] 10.244.2.2:33554 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202349s
	[INFO] 10.244.2.2:49854 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138224s
	[INFO] 10.244.2.2:52911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113497s
	[INFO] 10.244.1.2:58083 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001926786s
	[INFO] 10.244.1.2:40090 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179243s
	[INFO] 10.244.0.4:38072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911453s
	[INFO] 10.244.0.4:48123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124668s
	[INFO] 10.244.2.2:45589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104297s
	[INFO] 10.244.2.2:47676 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096845s
	[INFO] 10.244.2.2:34029 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090037s
	[INFO] 10.244.2.2:44387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085042s
	[INFO] 10.244.1.2:39606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160442s
	[INFO] 10.244.1.2:35616 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085764s
	[INFO] 10.244.1.2:41949 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261174s
	[INFO] 10.244.1.2:33001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071351s
	[INFO] 10.244.0.4:57464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150636s
	[INFO] 10.244.2.2:55232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242943s
	[INFO] 10.244.2.2:35398 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209274s
	[INFO] 10.244.1.2:40761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122103s
	[INFO] 10.244.1.2:46518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133408s
	[INFO] 10.244.1.2:41022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117384s
	
	
	==> coredns [a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696] <==
	[INFO] 10.244.1.2:36903 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001963097s
	[INFO] 10.244.2.2:42077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197227s
	[INFO] 10.244.2.2:53338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203298s
	[INFO] 10.244.1.2:37962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128488s
	[INFO] 10.244.1.2:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098031s
	[INFO] 10.244.1.2:33689 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277395s
	[INFO] 10.244.1.2:40131 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001237471s
	[INFO] 10.244.1.2:39633 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131283s
	[INFO] 10.244.1.2:60171 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121735s
	[INFO] 10.244.0.4:60191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114357s
	[INFO] 10.244.0.4:41890 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066371s
	[INFO] 10.244.0.4:55945 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119788s
	[INFO] 10.244.0.4:57226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001318461s
	[INFO] 10.244.0.4:56732 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093503s
	[INFO] 10.244.0.4:52075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104691s
	[INFO] 10.244.0.4:60105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121048s
	[INFO] 10.244.0.4:43134 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066121s
	[INFO] 10.244.0.4:44998 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063593s
	[INFO] 10.244.2.2:47337 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013984s
	[INFO] 10.244.2.2:54916 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155787s
	[INFO] 10.244.1.2:40477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149375s
	[INFO] 10.244.0.4:48877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125695s
	[INFO] 10.244.0.4:37769 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100407s
	[INFO] 10.244.0.4:53971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045729s
	[INFO] 10.244.0.4:37660 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000216606s
	
	
	==> describe nodes <==
	Name:               ha-863936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_37_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:37:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:45:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:40:50 +0000   Fri, 16 Aug 2024 12:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-863936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f8ad5d72f24178a58c9bc9c1f37801
	  System UUID:                10f8ad5d-72f2-4178-a58c-9bc9c1f37801
	  Boot ID:                    4cc922cf-4096-4ce6-955a-2954b5f98b77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zqpfx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-6f6b679f8f-7gfgm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m49s
	  kube-system                 coredns-6f6b679f8f-ssb5h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m49s
	  kube-system                 etcd-ha-863936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m53s
	  kube-system                 kindnet-dddkq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m49s
	  kube-system                 kube-apiserver-ha-863936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-controller-manager-ha-863936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-proxy-g75mg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-scheduler-ha-863936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-vip-ha-863936                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m48s  kube-proxy       
	  Normal  Starting                 7m54s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m53s  kubelet          Node ha-863936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m53s  kubelet          Node ha-863936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s  kubelet          Node ha-863936 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m50s  node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal  NodeReady                7m33s  kubelet          Node ha-863936 status is now: NodeReady
	  Normal  RegisteredNode           6m6s   node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal  RegisteredNode           4m52s  node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	
	
	Name:               ha-863936-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:38:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:41:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 12:40:58 +0000   Fri, 16 Aug 2024 12:42:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-863936-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c538a90b7afb4607a2068ae6c8689740
	  System UUID:                c538a90b-7afb-4607-a206-8ae6c8689740
	  Boot ID:                    905428ee-99b5-4544-bd9e-3ece49443b02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5tjw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-863936-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-qmrb2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-863936-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-863936-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-7lvfc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-863936-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-863936-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m14s)  kubelet          Node ha-863936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-863936-m02 status is now: NodeNotReady
	
	
	Name:               ha-863936-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:40:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:45:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:41:10 +0000   Fri, 16 Aug 2024 12:40:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-863936-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b54ef01aeadc4a70aaecea24c80f74de
	  System UUID:                b54ef01a-eadc-4a70-aaec-ea24c80f74de
	  Boot ID:                    01b09bab-fb3a-4947-8e0c-d6a621aada21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gm458                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-863936-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m1s
	  kube-system                 kindnet-zqs4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m1s
	  kube-system                 kube-apiserver-ha-863936-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-ha-863936-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-25gzj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-ha-863936-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-863936-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m57s                kube-proxy       
	  Normal  Starting                 5m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m1s (x8 over 5m1s)  kubelet          Node ha-863936-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x8 over 5m1s)  kubelet          Node ha-863936-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x7 over 5m1s)  kubelet          Node ha-863936-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m                   node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal  RegisteredNode           4m56s                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal  RegisteredNode           4m52s                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	
	
	Name:               ha-863936-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_41_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:41:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:45:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:41:46 +0000   Fri, 16 Aug 2024 12:41:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-863936-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13346cf592d54450aa4bb72c3dba17c9
	  System UUID:                13346cf5-92d5-4450-aa4b-b72c3dba17c9
	  Boot ID:                    51e69c7f-b3b6-4d26-8d6c-cea0170d4a5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c6wlb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-lsjgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m55s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m55s)  kubelet          Node ha-863936-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m55s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node ha-863936-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 12:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039040] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779878] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.388981] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.556022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.777615] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.058123] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055634] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.181681] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.119869] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.269746] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug16 12:37] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.293923] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.058457] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.209516] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.086313] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133654] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.050308] kauditd_printk_skb: 34 callbacks suppressed
	[Aug16 12:39] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559] <==
	{"level":"warn","ts":"2024-08-16T12:45:09.023404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.122994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.141609Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.328749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.337812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.338659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.346271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.352681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.356836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.360109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.365288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.371592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.379472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.383010Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.386840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.390024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.397066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.404371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.411756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.415544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.418559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.422264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.422875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.428200Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:45:09.434353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:45:09 up 8 min,  0 users,  load average: 0.58, 0.34, 0.18
	Linux ha-863936 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331] <==
	I0816 12:44:36.073754       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:44:46.069352       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:44:46.069409       1 main.go:299] handling current node
	I0816 12:44:46.069435       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:44:46.069440       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:44:46.069592       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:44:46.069615       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:44:46.069670       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:44:46.069691       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:44:56.070560       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:44:56.070670       1 main.go:299] handling current node
	I0816 12:44:56.070699       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:44:56.070709       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:44:56.070857       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:44:56.070890       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:44:56.071033       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:44:56.071059       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:45:06.073196       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:45:06.073422       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:45:06.073788       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:45:06.073820       1 main.go:299] handling current node
	I0816 12:45:06.073852       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:45:06.073857       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:45:06.074012       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:45:06.074033       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9] <==
	W0816 12:37:14.433357       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2]
	I0816 12:37:14.434364       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 12:37:14.438668       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 12:37:14.653639       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 12:37:15.765627       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 12:37:15.782676       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 12:37:15.950060       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 12:37:19.704519       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0816 12:37:20.357612       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0816 12:40:45.061392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53080: use of closed network connection
	E0816 12:40:45.253892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53100: use of closed network connection
	E0816 12:40:45.447692       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53124: use of closed network connection
	E0816 12:40:45.653914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53136: use of closed network connection
	E0816 12:40:45.836291       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53158: use of closed network connection
	E0816 12:40:46.026318       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53176: use of closed network connection
	E0816 12:40:46.206323       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53200: use of closed network connection
	E0816 12:40:46.381802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53218: use of closed network connection
	E0816 12:40:46.572604       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53236: use of closed network connection
	E0816 12:40:46.860849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51704: use of closed network connection
	E0816 12:40:47.031367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51730: use of closed network connection
	E0816 12:40:47.215777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51750: use of closed network connection
	E0816 12:40:47.385810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51758: use of closed network connection
	E0816 12:40:47.566136       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51772: use of closed network connection
	E0816 12:40:47.738757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51796: use of closed network connection
	W0816 12:42:14.447226       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.2]
	
	
	==> kube-controller-manager [2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62] <==
	I0816 12:41:15.332255       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-863936-m04" podCIDRs=["10.244.3.0/24"]
	I0816 12:41:15.332762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.333204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.357606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.454340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:15.865264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:17.660549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:18.343928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:18.408931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:19.606298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:19.606662       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-863936-m04"
	I0816 12:41:19.682214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:25.705173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:35.728377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:35.729126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863936-m04"
	I0816 12:41:35.742298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:37.620362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:41:46.018491       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:42:29.633456       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:42:29.634075       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863936-m04"
	I0816 12:42:29.653483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:42:29.785285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.692871ms"
	I0816 12:42:29.785365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.842µs"
	I0816 12:42:32.652242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:42:34.885086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	
	
	==> kube-proxy [4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:37:21.157152       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:37:21.180318       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	E0816 12:37:21.180543       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:37:21.233094       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:37:21.233152       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:37:21.233178       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:37:21.235918       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:37:21.236251       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:37:21.236279       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:37:21.237617       1 config.go:197] "Starting service config controller"
	I0816 12:37:21.237665       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:37:21.237685       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:37:21.237703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:37:21.238479       1 config.go:326] "Starting node config controller"
	I0816 12:37:21.238504       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:37:21.338389       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:37:21.338433       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:37:21.338640       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d] <==
	E0816 12:40:08.738668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zqs4l\": pod kindnet-zqs4l is already assigned to node \"ha-863936-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-zqs4l" node="ha-863936-m03"
	E0816 12:40:08.739389       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b9054301-c9d9-4f2e-94c9-4557d6f4af2c(kube-system/kindnet-zqs4l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zqs4l"
	E0816 12:40:08.739626       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zqs4l\": pod kindnet-zqs4l is already assigned to node \"ha-863936-m03\"" pod="kube-system/kindnet-zqs4l"
	I0816 12:40:08.739839       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zqs4l" node="ha-863936-m03"
	E0816 12:40:08.762522       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-25gzj\": pod kube-proxy-25gzj is already assigned to node \"ha-863936-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-25gzj" node="ha-863936-m03"
	E0816 12:40:08.762585       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8014f69d-cbe6-4369-8dbc-95bb5a429c22(kube-system/kube-proxy-25gzj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-25gzj"
	E0816 12:40:08.762600       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-25gzj\": pod kube-proxy-25gzj is already assigned to node \"ha-863936-m03\"" pod="kube-system/kube-proxy-25gzj"
	I0816 12:40:08.762640       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-25gzj" node="ha-863936-m03"
	E0816 12:40:38.693364       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gm458\": pod busybox-7dff88458-gm458 is already assigned to node \"ha-863936-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gm458" node="ha-863936-m02"
	E0816 12:40:38.693487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gm458\": pod busybox-7dff88458-gm458 is already assigned to node \"ha-863936-m03\"" pod="default/busybox-7dff88458-gm458"
	E0816 12:40:38.739428       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zqpfx\": pod busybox-7dff88458-zqpfx is already assigned to node \"ha-863936\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-zqpfx" node="ha-863936-m02"
	E0816 12:40:38.739543       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zqpfx\": pod busybox-7dff88458-zqpfx is already assigned to node \"ha-863936\"" pod="default/busybox-7dff88458-zqpfx"
	I0816 12:40:38.740159       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="ac686aab-89e4-4f07-8123-835111b35e68" pod="default/busybox-7dff88458-t5tjw" assumedNode="ha-863936-m02" currentNode="ha-863936-m03"
	E0816 12:40:38.740246       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5tjw\": pod busybox-7dff88458-t5tjw is already assigned to node \"ha-863936-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t5tjw" node="ha-863936-m03"
	E0816 12:40:38.740275       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ac686aab-89e4-4f07-8123-835111b35e68(default/busybox-7dff88458-t5tjw) was assumed on ha-863936-m03 but assigned to ha-863936-m02" pod="default/busybox-7dff88458-t5tjw"
	E0816 12:40:38.740288       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5tjw\": pod busybox-7dff88458-t5tjw is already assigned to node \"ha-863936-m02\"" pod="default/busybox-7dff88458-t5tjw"
	I0816 12:40:38.740306       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t5tjw" node="ha-863936-m02"
	E0816 12:41:15.413439       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c6wlb\": pod kindnet-c6wlb is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-c6wlb" node="ha-863936-m04"
	E0816 12:41:15.418107       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d6429c25-2e31-4126-9629-0389aeec7999(kube-system/kindnet-c6wlb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-c6wlb"
	E0816 12:41:15.420071       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c6wlb\": pod kindnet-c6wlb is already assigned to node \"ha-863936-m04\"" pod="kube-system/kindnet-c6wlb"
	I0816 12:41:15.420190       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c6wlb" node="ha-863936-m04"
	E0816 12:41:15.413578       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	E0816 12:41:15.424458       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71a9943c-8ebe-4a91-876f-8e47aca3f719(kube-system/kube-proxy-lsjgf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lsjgf"
	E0816 12:41:15.425608       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" pod="kube-system/kube-proxy-lsjgf"
	I0816 12:41:15.425683       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	
	
	==> kubelet <==
	Aug 16 12:43:36 ha-863936 kubelet[1336]: E0816 12:43:36.032011    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812216031661261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:36 ha-863936 kubelet[1336]: E0816 12:43:36.032049    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812216031661261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:46 ha-863936 kubelet[1336]: E0816 12:43:46.033626    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812226033333739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:46 ha-863936 kubelet[1336]: E0816 12:43:46.033688    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812226033333739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:56 ha-863936 kubelet[1336]: E0816 12:43:56.035790    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812236035332547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:43:56 ha-863936 kubelet[1336]: E0816 12:43:56.036289    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812236035332547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:06 ha-863936 kubelet[1336]: E0816 12:44:06.039282    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812246038612761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:06 ha-863936 kubelet[1336]: E0816 12:44:06.039310    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812246038612761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:15 ha-863936 kubelet[1336]: E0816 12:44:15.932014    1336 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:44:15 ha-863936 kubelet[1336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:44:15 ha-863936 kubelet[1336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:44:15 ha-863936 kubelet[1336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:44:15 ha-863936 kubelet[1336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:44:16 ha-863936 kubelet[1336]: E0816 12:44:16.040997    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812256040525046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:16 ha-863936 kubelet[1336]: E0816 12:44:16.041025    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812256040525046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:26 ha-863936 kubelet[1336]: E0816 12:44:26.042676    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812266042388588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:26 ha-863936 kubelet[1336]: E0816 12:44:26.042718    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812266042388588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:36 ha-863936 kubelet[1336]: E0816 12:44:36.044622    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812276044216489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:36 ha-863936 kubelet[1336]: E0816 12:44:36.045003    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812276044216489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:46 ha-863936 kubelet[1336]: E0816 12:44:46.047051    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812286046677866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:46 ha-863936 kubelet[1336]: E0816 12:44:46.047104    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812286046677866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:56 ha-863936 kubelet[1336]: E0816 12:44:56.049043    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812296048563076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:44:56 ha-863936 kubelet[1336]: E0816 12:44:56.049334    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812296048563076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:45:06 ha-863936 kubelet[1336]: E0816 12:45:06.052139    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812306051444983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:45:06 ha-863936 kubelet[1336]: E0816 12:45:06.052559    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812306051444983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863936 -n ha-863936
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (49.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (356.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-863936 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-863936 -v=7 --alsologtostderr
E0816 12:45:40.921458   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:46:08.622026   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-863936 -v=7 --alsologtostderr: exit status 82 (2m1.807967574s)

                                                
                                                
-- stdout --
	* Stopping node "ha-863936-m04"  ...
	* Stopping node "ha-863936-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:45:10.859212   28013 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:45:10.859439   28013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:45:10.859447   28013 out.go:358] Setting ErrFile to fd 2...
	I0816 12:45:10.859451   28013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:45:10.859610   28013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:45:10.859884   28013 out.go:352] Setting JSON to false
	I0816 12:45:10.859998   28013 mustload.go:65] Loading cluster: ha-863936
	I0816 12:45:10.860494   28013 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:45:10.860590   28013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:45:10.861292   28013 mustload.go:65] Loading cluster: ha-863936
	I0816 12:45:10.861494   28013 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:45:10.861549   28013 stop.go:39] StopHost: ha-863936-m04
	I0816 12:45:10.862092   28013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:10.862152   28013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:10.876737   28013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I0816 12:45:10.877207   28013 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:10.877726   28013 main.go:141] libmachine: Using API Version  1
	I0816 12:45:10.877746   28013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:10.878114   28013 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:10.880366   28013 out.go:177] * Stopping node "ha-863936-m04"  ...
	I0816 12:45:10.881604   28013 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 12:45:10.881639   28013 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:45:10.881837   28013 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 12:45:10.881860   28013 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:45:10.884427   28013 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:10.884796   28013 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:41:03 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:45:10.884833   28013 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:45:10.884931   28013 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:45:10.885085   28013 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:45:10.885220   28013 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:45:10.885323   28013 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:45:10.969100   28013 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 12:45:11.024208   28013 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 12:45:11.079916   28013 main.go:141] libmachine: Stopping "ha-863936-m04"...
	I0816 12:45:11.079975   28013 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:45:11.081567   28013 main.go:141] libmachine: (ha-863936-m04) Calling .Stop
	I0816 12:45:11.084946   28013 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 0/120
	I0816 12:45:12.218915   28013 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:45:12.220264   28013 main.go:141] libmachine: Machine "ha-863936-m04" was stopped.
	I0816 12:45:12.220280   28013 stop.go:75] duration metric: took 1.338684287s to stop
	I0816 12:45:12.220302   28013 stop.go:39] StopHost: ha-863936-m03
	I0816 12:45:12.220673   28013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:45:12.220725   28013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:45:12.235187   28013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0816 12:45:12.235548   28013 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:45:12.236010   28013 main.go:141] libmachine: Using API Version  1
	I0816 12:45:12.236027   28013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:45:12.236307   28013 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:45:12.238234   28013 out.go:177] * Stopping node "ha-863936-m03"  ...
	I0816 12:45:12.239253   28013 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 12:45:12.239275   28013 main.go:141] libmachine: (ha-863936-m03) Calling .DriverName
	I0816 12:45:12.239476   28013 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 12:45:12.239497   28013 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHHostname
	I0816 12:45:12.242377   28013 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:12.242862   28013 main.go:141] libmachine: (ha-863936-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:05:59", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:39:35 +0000 UTC Type:0 Mac:52:54:00:ec:05:59 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-863936-m03 Clientid:01:52:54:00:ec:05:59}
	I0816 12:45:12.242895   28013 main.go:141] libmachine: (ha-863936-m03) DBG | domain ha-863936-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:ec:05:59 in network mk-ha-863936
	I0816 12:45:12.243032   28013 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHPort
	I0816 12:45:12.243218   28013 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHKeyPath
	I0816 12:45:12.243390   28013 main.go:141] libmachine: (ha-863936-m03) Calling .GetSSHUsername
	I0816 12:45:12.243549   28013 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m03/id_rsa Username:docker}
	I0816 12:45:12.327704   28013 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 12:45:12.381543   28013 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 12:45:12.435853   28013 main.go:141] libmachine: Stopping "ha-863936-m03"...
	I0816 12:45:12.435877   28013 main.go:141] libmachine: (ha-863936-m03) Calling .GetState
	I0816 12:45:12.437509   28013 main.go:141] libmachine: (ha-863936-m03) Calling .Stop
	I0816 12:45:12.441077   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 0/120
	I0816 12:45:13.442355   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 1/120
	I0816 12:45:14.443897   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 2/120
	I0816 12:45:15.445443   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 3/120
	I0816 12:45:16.447138   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 4/120
	I0816 12:45:17.449332   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 5/120
	I0816 12:45:18.450943   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 6/120
	I0816 12:45:19.452455   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 7/120
	I0816 12:45:20.454070   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 8/120
	I0816 12:45:21.455579   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 9/120
	I0816 12:45:22.457008   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 10/120
	I0816 12:45:23.458783   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 11/120
	I0816 12:45:24.460287   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 12/120
	I0816 12:45:25.461571   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 13/120
	I0816 12:45:26.462987   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 14/120
	I0816 12:45:27.464725   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 15/120
	I0816 12:45:28.466154   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 16/120
	I0816 12:45:29.467517   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 17/120
	I0816 12:45:30.468946   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 18/120
	I0816 12:45:31.470320   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 19/120
	I0816 12:45:32.472183   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 20/120
	I0816 12:45:33.473635   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 21/120
	I0816 12:45:34.475076   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 22/120
	I0816 12:45:35.476579   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 23/120
	I0816 12:45:36.478116   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 24/120
	I0816 12:45:37.479566   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 25/120
	I0816 12:45:38.480977   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 26/120
	I0816 12:45:39.482374   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 27/120
	I0816 12:45:40.483628   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 28/120
	I0816 12:45:41.485649   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 29/120
	I0816 12:45:42.487151   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 30/120
	I0816 12:45:43.488569   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 31/120
	I0816 12:45:44.490106   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 32/120
	I0816 12:45:45.491295   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 33/120
	I0816 12:45:46.492613   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 34/120
	I0816 12:45:47.494520   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 35/120
	I0816 12:45:48.495924   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 36/120
	I0816 12:45:49.497353   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 37/120
	I0816 12:45:50.498689   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 38/120
	I0816 12:45:51.499934   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 39/120
	I0816 12:45:52.501717   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 40/120
	I0816 12:45:53.503339   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 41/120
	I0816 12:45:54.504672   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 42/120
	I0816 12:45:55.506015   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 43/120
	I0816 12:45:56.507313   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 44/120
	I0816 12:45:57.509054   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 45/120
	I0816 12:45:58.511488   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 46/120
	I0816 12:45:59.513042   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 47/120
	I0816 12:46:00.514499   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 48/120
	I0816 12:46:01.515733   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 49/120
	I0816 12:46:02.517578   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 50/120
	I0816 12:46:03.518801   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 51/120
	I0816 12:46:04.520008   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 52/120
	I0816 12:46:05.521318   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 53/120
	I0816 12:46:06.522572   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 54/120
	I0816 12:46:07.524252   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 55/120
	I0816 12:46:08.525673   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 56/120
	I0816 12:46:09.527250   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 57/120
	I0816 12:46:10.528763   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 58/120
	I0816 12:46:11.530149   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 59/120
	I0816 12:46:12.531551   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 60/120
	I0816 12:46:13.532752   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 61/120
	I0816 12:46:14.534045   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 62/120
	I0816 12:46:15.535465   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 63/120
	I0816 12:46:16.536771   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 64/120
	I0816 12:46:17.538730   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 65/120
	I0816 12:46:18.539953   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 66/120
	I0816 12:46:19.541315   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 67/120
	I0816 12:46:20.542683   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 68/120
	I0816 12:46:21.543887   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 69/120
	I0816 12:46:22.545822   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 70/120
	I0816 12:46:23.547186   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 71/120
	I0816 12:46:24.548458   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 72/120
	I0816 12:46:25.550041   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 73/120
	I0816 12:46:26.551557   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 74/120
	I0816 12:46:27.553173   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 75/120
	I0816 12:46:28.554554   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 76/120
	I0816 12:46:29.555776   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 77/120
	I0816 12:46:30.557460   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 78/120
	I0816 12:46:31.558679   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 79/120
	I0816 12:46:32.560576   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 80/120
	I0816 12:46:33.561772   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 81/120
	I0816 12:46:34.563060   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 82/120
	I0816 12:46:35.564238   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 83/120
	I0816 12:46:36.565501   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 84/120
	I0816 12:46:37.567042   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 85/120
	I0816 12:46:38.568265   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 86/120
	I0816 12:46:39.569544   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 87/120
	I0816 12:46:40.570698   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 88/120
	I0816 12:46:41.571997   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 89/120
	I0816 12:46:42.573759   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 90/120
	I0816 12:46:43.574998   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 91/120
	I0816 12:46:44.576116   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 92/120
	I0816 12:46:45.577382   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 93/120
	I0816 12:46:46.578671   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 94/120
	I0816 12:46:47.580115   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 95/120
	I0816 12:46:48.581399   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 96/120
	I0816 12:46:49.582892   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 97/120
	I0816 12:46:50.584299   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 98/120
	I0816 12:46:51.585871   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 99/120
	I0816 12:46:52.587821   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 100/120
	I0816 12:46:53.589221   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 101/120
	I0816 12:46:54.590681   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 102/120
	I0816 12:46:55.592643   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 103/120
	I0816 12:46:56.593931   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 104/120
	I0816 12:46:57.595271   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 105/120
	I0816 12:46:58.596977   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 106/120
	I0816 12:46:59.598308   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 107/120
	I0816 12:47:00.599526   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 108/120
	I0816 12:47:01.600865   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 109/120
	I0816 12:47:02.602211   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 110/120
	I0816 12:47:03.604275   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 111/120
	I0816 12:47:04.605700   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 112/120
	I0816 12:47:05.606978   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 113/120
	I0816 12:47:06.608240   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 114/120
	I0816 12:47:07.609934   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 115/120
	I0816 12:47:08.611312   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 116/120
	I0816 12:47:09.612643   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 117/120
	I0816 12:47:10.614182   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 118/120
	I0816 12:47:11.615338   28013 main.go:141] libmachine: (ha-863936-m03) Waiting for machine to stop 119/120
	I0816 12:47:12.616328   28013 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 12:47:12.616372   28013 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 12:47:12.618295   28013 out.go:201] 
	W0816 12:47:12.619542   28013 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 12:47:12.619566   28013 out.go:270] * 
	* 
	W0816 12:47:12.621904   28013 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 12:47:12.623299   28013 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-863936 -v=7 --alsologtostderr" : exit status 82
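The trace above shows the driver polling the ha-863936-m03 VM state roughly once per second for 120 attempts before giving up, which surfaces as exit status 82 (GUEST_STOP_TIMEOUT). The snippet below is only an illustrative Go sketch of that bounded-wait pattern; every name in it is hypothetical and it is not minikube's or libmachine's actual code.

    // waitstop.go: minimal sketch of a bounded "wait for VM to stop" loop,
    // mirroring the 1-second / 120-attempt polling visible in the trace above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // vmState stands in for whatever state a VM driver would report.
    type vmState int

    const (
        running vmState = iota
        stopped
    )

    // errStopTimeout mirrors the "unable to stop vm" condition in the log.
    var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

    // waitForStop polls getState once per second, up to maxAttempts times,
    // and returns errStopTimeout if the machine never reports stopped.
    func waitForStop(getState func() vmState, maxAttempts int) error {
        for i := 0; i < maxAttempts; i++ {
            if getState() == stopped {
                return nil
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
            time.Sleep(1 * time.Second)
        }
        return errStopTimeout
    }

    func main() {
        // A fake driver that never stops, reproducing the timeout path.
        // The real trace uses 120 attempts; a smaller count keeps this demo short.
        alwaysRunning := func() vmState { return running }
        if err := waitForStop(alwaysRunning, 5); err != nil {
            fmt.Println("stop err:", err)
        }
    }

When the loop exhausts its attempts, the caller reports the failure and the test falls back to a full restart, which is what the following start invocation does.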
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-863936 --wait=true -v=7 --alsologtostderr
E0816 12:48:56.823608   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:50:19.889414   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:50:40.921012   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-863936 --wait=true -v=7 --alsologtostderr: (3m51.585462921s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-863936
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863936 -n ha-863936
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863936 logs -n 25: (2.141835821s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m04 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp testdata/cp-test.txt                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936:/home/docker/cp-test_ha-863936-m04_ha-863936.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936 sudo cat                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03:/home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m03 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863936 node stop m02 -v=7                                                     | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-863936 node start m02 -v=7                                                    | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863936 -v=7                                                           | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-863936 -v=7                                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-863936 --wait=true -v=7                                                    | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:47 UTC | 16 Aug 24 12:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863936                                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:51 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:47:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:47:12.666417   28466 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:47:12.666669   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:47:12.666678   28466 out.go:358] Setting ErrFile to fd 2...
	I0816 12:47:12.666682   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:47:12.666831   28466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:47:12.667392   28466 out.go:352] Setting JSON to false
	I0816 12:47:12.668288   28466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1778,"bootTime":1723810655,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:47:12.668342   28466 start.go:139] virtualization: kvm guest
	I0816 12:47:12.671664   28466 out.go:177] * [ha-863936] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:47:12.673302   28466 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:47:12.673304   28466 notify.go:220] Checking for updates...
	I0816 12:47:12.674803   28466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:47:12.676443   28466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:47:12.677987   28466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:47:12.679436   28466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:47:12.680839   28466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:47:12.682494   28466 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:47:12.682607   28466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:47:12.683178   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:47:12.683258   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:47:12.698014   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0816 12:47:12.698606   28466 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:47:12.699102   28466 main.go:141] libmachine: Using API Version  1
	I0816 12:47:12.699138   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:47:12.699461   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:47:12.699645   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:47:12.733350   28466 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 12:47:12.734770   28466 start.go:297] selected driver: kvm2
	I0816 12:47:12.734792   28466 start.go:901] validating driver "kvm2" against &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:47:12.734989   28466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:47:12.735447   28466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:47:12.735569   28466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:47:12.749799   28466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:47:12.750437   28466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:47:12.750498   28466 cni.go:84] Creating CNI manager for ""
	I0816 12:47:12.750509   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 12:47:12.750567   28466 start.go:340] cluster config:
	{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:47:12.750688   28466 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:47:12.752461   28466 out.go:177] * Starting "ha-863936" primary control-plane node in "ha-863936" cluster
	I0816 12:47:12.753668   28466 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:47:12.753701   28466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:47:12.753712   28466 cache.go:56] Caching tarball of preloaded images
	I0816 12:47:12.753784   28466 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:47:12.753794   28466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:47:12.753899   28466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:47:12.754122   28466 start.go:360] acquireMachinesLock for ha-863936: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:47:12.754173   28466 start.go:364] duration metric: took 29.398µs to acquireMachinesLock for "ha-863936"
	I0816 12:47:12.754189   28466 start.go:96] Skipping create...Using existing machine configuration
	I0816 12:47:12.754198   28466 fix.go:54] fixHost starting: 
	I0816 12:47:12.754472   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:47:12.754500   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:47:12.768329   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I0816 12:47:12.768742   28466 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:47:12.769264   28466 main.go:141] libmachine: Using API Version  1
	I0816 12:47:12.769291   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:47:12.769600   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:47:12.769759   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:47:12.769888   28466 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:47:12.771391   28466 fix.go:112] recreateIfNeeded on ha-863936: state=Running err=<nil>
	W0816 12:47:12.771412   28466 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 12:47:12.773101   28466 out.go:177] * Updating the running kvm2 "ha-863936" VM ...
	I0816 12:47:12.774159   28466 machine.go:93] provisionDockerMachine start ...
	I0816 12:47:12.774177   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:47:12.774346   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:12.776633   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.777058   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:12.777084   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.777203   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:12.777371   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.777532   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.777672   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:12.777830   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:12.778014   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:12.778025   28466 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 12:47:12.882202   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936
	
	I0816 12:47:12.882231   28466 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:47:12.882501   28466 buildroot.go:166] provisioning hostname "ha-863936"
	I0816 12:47:12.882520   28466 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:47:12.882671   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:12.885538   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.885951   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:12.885974   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.886198   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:12.886371   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.886548   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.886694   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:12.886864   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:12.887089   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:12.887107   28466 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936 && echo "ha-863936" | sudo tee /etc/hostname
	I0816 12:47:13.002689   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936
	
	I0816 12:47:13.002712   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.005811   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.006178   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.006199   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.006410   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:13.006584   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.006778   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.006940   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:13.007102   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:13.007277   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:13.007296   28466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:47:13.109896   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:47:13.109933   28466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:47:13.109984   28466 buildroot.go:174] setting up certificates
	I0816 12:47:13.110017   28466 provision.go:84] configureAuth start
	I0816 12:47:13.110034   28466 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:47:13.110314   28466 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:47:13.112710   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.113112   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.113139   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.113301   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.115312   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.115695   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.115722   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.115946   28466 provision.go:143] copyHostCerts
	I0816 12:47:13.115979   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:47:13.116009   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:47:13.116024   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:47:13.116091   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:47:13.116167   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:47:13.116185   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:47:13.116191   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:47:13.116214   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:47:13.116253   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:47:13.116268   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:47:13.116277   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:47:13.116298   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:47:13.116340   28466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936 san=[127.0.0.1 192.168.39.2 ha-863936 localhost minikube]
	I0816 12:47:13.241271   28466 provision.go:177] copyRemoteCerts
	I0816 12:47:13.241327   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:47:13.241348   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.244236   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.244675   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.244694   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.244879   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:13.245069   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.245226   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:13.245337   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:47:13.324175   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:47:13.324258   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:47:13.352785   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:47:13.352858   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 12:47:13.383693   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:47:13.383777   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 12:47:13.411313   28466 provision.go:87] duration metric: took 301.279937ms to configureAuth
	I0816 12:47:13.411341   28466 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:47:13.411601   28466 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:47:13.411681   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.414348   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.414704   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.414745   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.414898   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:13.415076   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.415225   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.415392   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:13.415565   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:13.415770   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:13.415797   28466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:48:44.394310   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:48:44.394335   28466 machine.go:96] duration metric: took 1m31.620163698s to provisionDockerMachine
	I0816 12:48:44.394354   28466 start.go:293] postStartSetup for "ha-863936" (driver="kvm2")
	I0816 12:48:44.394366   28466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:48:44.394385   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.394688   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:48:44.394719   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.397993   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.398427   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.398456   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.398607   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.398788   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.398967   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.399085   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:48:44.480013   28466 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:48:44.484297   28466 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:48:44.484330   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:48:44.484399   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:48:44.484482   28466 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:48:44.484493   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:48:44.484580   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:48:44.493723   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:48:44.518064   28466 start.go:296] duration metric: took 123.699008ms for postStartSetup
	I0816 12:48:44.518101   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.518362   28466 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 12:48:44.518401   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.521196   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.521654   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.521677   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.521820   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.521988   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.522154   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.522298   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	W0816 12:48:44.599191   28466 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0816 12:48:44.599218   28466 fix.go:56] duration metric: took 1m31.845022194s for fixHost
	I0816 12:48:44.599242   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.601955   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.602471   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.602500   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.602682   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.602877   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.603063   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.603220   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.603384   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:48:44.603551   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:48:44.603560   28466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:48:44.702014   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812524.668188641
	
	I0816 12:48:44.702039   28466 fix.go:216] guest clock: 1723812524.668188641
	I0816 12:48:44.702048   28466 fix.go:229] Guest: 2024-08-16 12:48:44.668188641 +0000 UTC Remote: 2024-08-16 12:48:44.599226034 +0000 UTC m=+91.966804300 (delta=68.962607ms)
	I0816 12:48:44.702093   28466 fix.go:200] guest clock delta is within tolerance: 68.962607ms
	I0816 12:48:44.702104   28466 start.go:83] releasing machines lock for "ha-863936", held for 1m31.947919353s
	I0816 12:48:44.702142   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.702373   28466 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:48:44.704995   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.705336   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.705359   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.705552   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.706017   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.706213   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.706316   28466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:48:44.706359   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.706454   28466 ssh_runner.go:195] Run: cat /version.json
	I0816 12:48:44.706480   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.708936   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709257   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.709284   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709302   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709361   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.709542   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.709685   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.709752   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.709775   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709816   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:48:44.709944   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.710085   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.710227   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.710366   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:48:44.782444   28466 ssh_runner.go:195] Run: systemctl --version
	I0816 12:48:44.805194   28466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:48:44.967393   28466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:48:44.973191   28466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:48:44.973253   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:48:44.982418   28466 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 12:48:44.982437   28466 start.go:495] detecting cgroup driver to use...
	I0816 12:48:44.982490   28466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:48:44.998364   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:48:45.012279   28466 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:48:45.012338   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:48:45.025798   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:48:45.038835   28466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:48:45.180125   28466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:48:45.328402   28466 docker.go:233] disabling docker service ...
	I0816 12:48:45.328478   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:48:45.345286   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:48:45.359026   28466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:48:45.509178   28466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:48:45.652776   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:48:45.666563   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:48:45.686132   28466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:48:45.686195   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.696381   28466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:48:45.696445   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.706372   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.716646   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.726888   28466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:48:45.737421   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.747282   28466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.758357   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.768222   28466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:48:45.777321   28466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:48:45.786097   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:48:45.935318   28466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:48:46.227265   28466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:48:46.227347   28466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:48:46.236106   28466 start.go:563] Will wait 60s for crictl version
	I0816 12:48:46.236176   28466 ssh_runner.go:195] Run: which crictl
	I0816 12:48:46.239945   28466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:48:46.275481   28466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:48:46.275568   28466 ssh_runner.go:195] Run: crio --version
	I0816 12:48:46.305240   28466 ssh_runner.go:195] Run: crio --version
	I0816 12:48:46.336817   28466 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:48:46.338167   28466 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:48:46.340854   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:46.341256   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:46.341282   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:46.341443   28466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:48:46.346335   28466 kubeadm.go:883] updating cluster {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:48:46.346468   28466 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:48:46.346515   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:48:46.389339   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:48:46.389363   28466 crio.go:433] Images already preloaded, skipping extraction
	I0816 12:48:46.389436   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:48:46.438791   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:48:46.438813   28466 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:48:46.438822   28466 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.31.0 crio true true} ...
	I0816 12:48:46.438936   28466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:48:46.439000   28466 ssh_runner.go:195] Run: crio config
	I0816 12:48:46.554876   28466 cni.go:84] Creating CNI manager for ""
	I0816 12:48:46.554897   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 12:48:46.554908   28466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:48:46.554935   28466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863936 NodeName:ha-863936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:48:46.555102   28466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:48:46.555156   28466 kube-vip.go:115] generating kube-vip config ...
	I0816 12:48:46.555206   28466 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:48:46.571361   28466 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:48:46.571462   28466 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 12:48:46.571516   28466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:48:46.581995   28466 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:48:46.582052   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 12:48:46.592152   28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 12:48:46.612010   28466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:48:46.641550   28466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0816 12:48:46.659956   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 12:48:46.683534   28466 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:48:46.692332   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:48:46.860322   28466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:48:46.877204   28466 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.2
	I0816 12:48:46.877223   28466 certs.go:194] generating shared ca certs ...
	I0816 12:48:46.877235   28466 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:48:46.877378   28466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:48:46.877421   28466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:48:46.877431   28466 certs.go:256] generating profile certs ...
	I0816 12:48:46.877501   28466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:48:46.877529   28466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07
	I0816 12:48:46.877550   28466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.116 192.168.39.254]
	I0816 12:48:46.987353   28466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07 ...
	I0816 12:48:46.987382   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07: {Name:mk10d54a2525ec300df31026c8b6dc6102e2744f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:48:46.987569   28466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07 ...
	I0816 12:48:46.987582   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07: {Name:mk2f0b27b4a347a7366b445074cb7ce586272135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:48:46.987660   28466 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:48:46.987812   28466 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:48:46.987934   28466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:48:46.987949   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:48:46.987961   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:48:46.987974   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:48:46.987986   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:48:46.987998   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:48:46.988010   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:48:46.988022   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:48:46.988033   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:48:46.988082   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:48:46.988141   28466 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:48:46.988153   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:48:46.988177   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:48:46.988199   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:48:46.988220   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:48:46.988257   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:48:46.988285   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:48:46.988301   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:48:46.988314   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:46.988807   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:48:47.012890   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:48:47.036546   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:48:47.060027   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:48:47.083638   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 12:48:47.107700   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 12:48:47.131823   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:48:47.156027   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:48:47.179930   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:48:47.203842   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:48:47.227641   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:48:47.252594   28466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:48:47.269431   28466 ssh_runner.go:195] Run: openssl version
	I0816 12:48:47.275563   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:48:47.285825   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:47.290289   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:47.290332   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:47.295837   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:48:47.304644   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:48:47.315050   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:48:47.319541   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:48:47.319583   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:48:47.325197   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:48:47.334118   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:48:47.344896   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:48:47.349565   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:48:47.349622   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:48:47.355259   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:48:47.364307   28466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:48:47.368925   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 12:48:47.374542   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 12:48:47.380047   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 12:48:47.385632   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 12:48:47.391346   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 12:48:47.397080   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 12:48:47.402850   28466 kubeadm.go:392] StartCluster: {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:48:47.402987   28466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:48:47.403060   28466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:48:47.446398   28466 cri.go:89] found id: "41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f"
	I0816 12:48:47.446423   28466 cri.go:89] found id: "13ee625d64ac22e3dbd2a411db60aa943aca2b0965240ce6d86470b99d108a28"
	I0816 12:48:47.446427   28466 cri.go:89] found id: "27fd86b233d7915b829d3d87a08450886d7cf55ca3dafce85c215cb3718f4022"
	I0816 12:48:47.446430   28466 cri.go:89] found id: "6c1af75bd6dc5d1a0980fa2b20a308aa9c311599686714bc15f19c6a16dcd811"
	I0816 12:48:47.446433   28466 cri.go:89] found id: "a7e67a022e7b9b1a5a3ea3fbc46623fd4813ff6efeaf4cff5f954a956b23545c"
	I0816 12:48:47.446436   28466 cri.go:89] found id: "a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696"
	I0816 12:48:47.446438   28466 cri.go:89] found id: "8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6"
	I0816 12:48:47.446441   28466 cri.go:89] found id: "b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331"
	I0816 12:48:47.446443   28466 cri.go:89] found id: "4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4"
	I0816 12:48:47.446448   28466 cri.go:89] found id: "50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a"
	I0816 12:48:47.446450   28466 cri.go:89] found id: "f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559"
	I0816 12:48:47.446453   28466 cri.go:89] found id: "ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9"
	I0816 12:48:47.446455   28466 cri.go:89] found id: "4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d"
	I0816 12:48:47.446457   28466 cri.go:89] found id: "2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62"
	I0816 12:48:47.446461   28466 cri.go:89] found id: ""
	I0816 12:48:47.446500   28466 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 16 12:51:04 ha-863936 crio[3656]: time="2024-08-16 12:51:04.963266644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812664963235433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4acad5e0-11b6-4046-b3be-a3392e27ddc7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:04 ha-863936 crio[3656]: time="2024-08-16 12:51:04.964235822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4cb61e1-8357-4b7a-bcb4-6eaaaf380c7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:04 ha-863936 crio[3656]: time="2024-08-16 12:51:04.964319808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4cb61e1-8357-4b7a-bcb4-6eaaaf380c7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:04 ha-863936 crio[3656]: time="2024-08-16 12:51:04.965088850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4cb61e1-8357-4b7a-bcb4-6eaaaf380c7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.023871740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bfc6dd7-3f59-4be7-bb97-c8f434ef9a08 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.024054848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bfc6dd7-3f59-4be7-bb97-c8f434ef9a08 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.025910146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41377581-17b6-46d4-90bc-a02be90b5b45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.026639895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812665026609896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41377581-17b6-46d4-90bc-a02be90b5b45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.027838715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4696be06-2124-4a65-ace5-57b752a27ebd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.028154320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4696be06-2124-4a65-ace5-57b752a27ebd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.029497160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4696be06-2124-4a65-ace5-57b752a27ebd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.106107509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3582cc7-c222-45ba-a661-24849b6f85c5 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.106229965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3582cc7-c222-45ba-a661-24849b6f85c5 name=/runtime.v1.RuntimeService/Version
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.107789539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48c956be-348c-4acc-a6ff-f17f80ca8688 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.108621289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812665108569505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48c956be-348c-4acc-a6ff-f17f80ca8688 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.109331622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d25f416d-725e-43c1-8d38-4928a2b24d42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.109431681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d25f416d-725e-43c1-8d38-4928a2b24d42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.110609264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d25f416d-725e-43c1-8d38-4928a2b24d42 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.173454642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ce8c5a1-5373-4dae-9cda-c86d4f0b9fcf name=/runtime.v1.RuntimeService/Version
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.173524270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ce8c5a1-5373-4dae-9cda-c86d4f0b9fcf name=/runtime.v1.RuntimeService/Version
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.174713695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91843ab9-6546-4d2a-9a93-7c10b45cf104 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.175270174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812665175245654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91843ab9-6546-4d2a-9a93-7c10b45cf104 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.176089106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=671a3e1b-33ae-477e-8b28-409581c6d772 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.176168927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=671a3e1b-33ae-477e-8b28-409581c6d772 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:51:05 ha-863936 crio[3656]: time="2024-08-16 12:51:05.177668359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=671a3e1b-33ae-477e-8b28-409581c6d772 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d9f9cdb49f2e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   6c6ce59b10f02       storage-provisioner
	f272c68cb5f2b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   2a3662a22babc       kube-controller-manager-ha-863936
	e2ffb5eb0f0df       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   05ebc660f3a00       busybox-7dff88458-zqpfx
	f57541ef075e2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            3                   2e5770a9723b1       kube-apiserver-ha-863936
	f03a55be0c75f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      About a minute ago   Running             kube-vip                  0                   7153f574b2fa1       kube-vip-ha-863936
	47c8f68686797       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   6c6ce59b10f02       storage-provisioner
	15e34877aa55b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   4228908a42f0b       kube-proxy-g75mg
	6ab681701e029       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   564511f6f39d5       coredns-6f6b679f8f-7gfgm
	69857a90cec72       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   b6dbe84b7a343       kindnet-dddkq
	4382fddee87cc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   e745636924446       etcd-ha-863936
	716dd81dd1440       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   2a3662a22babc       kube-controller-manager-ha-863936
	de7f3e4f5a386       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   2e5770a9723b1       kube-apiserver-ha-863936
	ec46a3a2004fc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   2e5be3e2e792c       kube-scheduler-ha-863936
	41ebcb2f3d94d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   9f99975c570c5       coredns-6f6b679f8f-ssb5h
	e73d7f930e176       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   5f9b33b7fe6f2       busybox-7dff88458-zqpfx
	a32107a6690bf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   13e4c008cfb7e       coredns-6f6b679f8f-ssb5h
	8fb58a4d7b8e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   7061cc0bd22ac       coredns-6f6b679f8f-7gfgm
	b83ba25619ab6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   d524a508e86ff       kindnet-dddkq
	4aa588906cdcd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   e0fda91da3630       kube-proxy-g75mg
	f34879b3d9bde       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   30242516e8e9a       etcd-ha-863936
	4a0281c780fc2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   40cdcfe4bd9df       kube-scheduler-ha-863936
	
	
	==> coredns [41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49222->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49222->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49216->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49216->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6ab681701e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec] <==
	Trace[1153588394]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:49:07.551)
	Trace[1153588394]: [10.001927654s] [10.001927654s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52598->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52598->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6] <==
	[INFO] 10.244.2.2:33554 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202349s
	[INFO] 10.244.2.2:49854 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138224s
	[INFO] 10.244.2.2:52911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113497s
	[INFO] 10.244.1.2:58083 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001926786s
	[INFO] 10.244.1.2:40090 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179243s
	[INFO] 10.244.0.4:38072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911453s
	[INFO] 10.244.0.4:48123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124668s
	[INFO] 10.244.2.2:45589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104297s
	[INFO] 10.244.2.2:47676 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096845s
	[INFO] 10.244.2.2:34029 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090037s
	[INFO] 10.244.2.2:44387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085042s
	[INFO] 10.244.1.2:39606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160442s
	[INFO] 10.244.1.2:35616 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085764s
	[INFO] 10.244.1.2:41949 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261174s
	[INFO] 10.244.1.2:33001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071351s
	[INFO] 10.244.0.4:57464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150636s
	[INFO] 10.244.2.2:55232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242943s
	[INFO] 10.244.2.2:35398 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209274s
	[INFO] 10.244.1.2:40761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122103s
	[INFO] 10.244.1.2:46518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133408s
	[INFO] 10.244.1.2:41022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117384s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696] <==
	[INFO] 10.244.1.2:37962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128488s
	[INFO] 10.244.1.2:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098031s
	[INFO] 10.244.1.2:33689 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277395s
	[INFO] 10.244.1.2:40131 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001237471s
	[INFO] 10.244.1.2:39633 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131283s
	[INFO] 10.244.1.2:60171 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121735s
	[INFO] 10.244.0.4:60191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114357s
	[INFO] 10.244.0.4:41890 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066371s
	[INFO] 10.244.0.4:55945 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119788s
	[INFO] 10.244.0.4:57226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001318461s
	[INFO] 10.244.0.4:56732 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093503s
	[INFO] 10.244.0.4:52075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104691s
	[INFO] 10.244.0.4:60105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121048s
	[INFO] 10.244.0.4:43134 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066121s
	[INFO] 10.244.0.4:44998 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063593s
	[INFO] 10.244.2.2:47337 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013984s
	[INFO] 10.244.2.2:54916 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155787s
	[INFO] 10.244.1.2:40477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149375s
	[INFO] 10.244.0.4:48877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125695s
	[INFO] 10.244.0.4:37769 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100407s
	[INFO] 10.244.0.4:53971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045729s
	[INFO] 10.244.0.4:37660 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000216606s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1935&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-863936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_37_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:37:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:51:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:49:32 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:49:32 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:49:32 +0000   Fri, 16 Aug 2024 12:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:49:32 +0000   Fri, 16 Aug 2024 12:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-863936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f8ad5d72f24178a58c9bc9c1f37801
	  System UUID:                10f8ad5d-72f2-4178-a58c-9bc9c1f37801
	  Boot ID:                    4cc922cf-4096-4ce6-955a-2954b5f98b77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zqpfx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-7gfgm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-ssb5h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-863936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dddkq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-863936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-863936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-g75mg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-863936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-863936                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 89s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-863936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-863936 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-863936 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-863936 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Warning  ContainerGCFailed        2m50s (x2 over 3m50s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m33s (x3 over 3m22s)  kubelet          Node ha-863936 status is now: NodeNotReady
	  Normal   RegisteredNode           89s                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           87s                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           37s                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	
	
	Name:               ha-863936-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:38:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:51:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-863936-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c538a90b7afb4607a2068ae6c8689740
	  System UUID:                c538a90b-7afb-4607-a206-8ae6c8689740
	  Boot ID:                    49123558-9c59-443f-8741-ca8abe8591ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5tjw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-863936-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qmrb2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-863936-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-863936-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7lvfc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-863936-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-863936-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 76s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-863936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-863936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-863936-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  NodeNotReady             8m36s                node-controller  Node ha-863936-m02 status is now: NodeNotReady
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node ha-863936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           87s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           37s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	
	
	Name:               ha-863936-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:40:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:51:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:50:42 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:50:42 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:50:42 +0000   Fri, 16 Aug 2024 12:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:50:42 +0000   Fri, 16 Aug 2024 12:40:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    ha-863936-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b54ef01aeadc4a70aaecea24c80f74de
	  System UUID:                b54ef01a-eadc-4a70-aaec-ea24c80f74de
	  Boot ID:                    21354d4c-5bf7-4cfa-bd4b-e7323f8ad30d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gm458                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-863936-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-zqs4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-863936-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-863936-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-25gzj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-863936-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-863936-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 36s                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-863936-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-863936-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-863936-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s                kubelet          Node ha-863936-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s                kubelet          Node ha-863936-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s                kubelet          Node ha-863936-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 54s                kubelet          Node ha-863936-m03 has been rebooted, boot id: 21354d4c-5bf7-4cfa-bd4b-e7323f8ad30d
	  Normal   RegisteredNode           37s                node-controller  Node ha-863936-m03 event: Registered Node ha-863936-m03 in Controller
	
	
	Name:               ha-863936-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_41_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:41:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:50:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:50:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:50:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:50:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-863936-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13346cf592d54450aa4bb72c3dba17c9
	  System UUID:                13346cf5-92d5-4450-aa4b-b72c3dba17c9
	  Boot ID:                    dda3aa5d-b523-4d77-a3a9-6f8a86052d9e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c6wlb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-proxy-lsjgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m50s (x2 over 9m51s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m50s (x2 over 9m51s)  kubelet          Node ha-863936-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m50s (x2 over 9m51s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m48s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           9m47s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           9m46s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   NodeReady                9m30s                  kubelet          Node ha-863936-m04 status is now: NodeReady
	  Normal   RegisteredNode           89s                    node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           87s                    node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   NodeNotReady             49s                    node-controller  Node ha-863936-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           37s                    node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                     kubelet          Node ha-863936-m04 has been rebooted, boot id: dda3aa5d-b523-4d77-a3a9-6f8a86052d9e
	  Normal   NodeReady                8s                     kubelet          Node ha-863936-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  7s (x2 over 8s)        kubelet          Node ha-863936-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s (x2 over 8s)        kubelet          Node ha-863936-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s (x2 over 8s)        kubelet          Node ha-863936-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[ +10.777615] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.058123] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055634] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.181681] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.119869] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.269746] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug16 12:37] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.293923] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.058457] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.209516] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.086313] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133654] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.050308] kauditd_printk_skb: 34 callbacks suppressed
	[Aug16 12:39] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 12:45] kauditd_printk_skb: 1 callbacks suppressed
	[Aug16 12:48] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[  +0.145699] systemd-fstab-generator[3588]: Ignoring "noauto" option for root device
	[  +0.186683] systemd-fstab-generator[3602]: Ignoring "noauto" option for root device
	[  +0.142181] systemd-fstab-generator[3614]: Ignoring "noauto" option for root device
	[  +0.277142] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.899065] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +6.580987] kauditd_printk_skb: 132 callbacks suppressed
	[Aug16 12:49] kauditd_printk_skb: 76 callbacks suppressed
	[ +26.726246] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.536427] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b] <==
	{"level":"warn","ts":"2024-08-16T12:50:06.028276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.030591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.042792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.140670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.142850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.243334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.342604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.442494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:06.543063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T12:50:09.687119Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d6e396237a03cb80","rtt":"0s","error":"dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:09.687140Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d6e396237a03cb80","rtt":"0s","error":"dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:09.950670Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.116:2380/version","remote-member-id":"d6e396237a03cb80","error":"Get \"https://192.168.39.116:2380/version\": dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:09.950802Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d6e396237a03cb80","error":"Get \"https://192.168.39.116:2380/version\": dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:13.953273Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.116:2380/version","remote-member-id":"d6e396237a03cb80","error":"Get \"https://192.168.39.116:2380/version\": dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:13.953333Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d6e396237a03cb80","error":"Get \"https://192.168.39.116:2380/version\": dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:14.687631Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d6e396237a03cb80","rtt":"0s","error":"dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:50:14.687713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d6e396237a03cb80","rtt":"0s","error":"dial tcp 192.168.39.116:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T12:50:16.861499Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:16.861546Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:16.873155Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:16.880875Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6c80de388e5020e8","to":"d6e396237a03cb80","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T12:50:16.880937Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:16.882156Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6c80de388e5020e8","to":"d6e396237a03cb80","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T12:50:16.882211Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:21.127158Z","caller":"traceutil/trace.go:171","msg":"trace[279510133] transaction","detail":"{read_only:false; response_revision:2398; number_of_response:1; }","duration":"127.970338ms","start":"2024-08-16T12:50:20.999172Z","end":"2024-08-16T12:50:21.127142Z","steps":["trace[279510133] 'process raft request'  (duration: 127.760605ms)"],"step_count":1}
	
	
	==> etcd [f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559] <==
	2024/08/16 12:47:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/16 12:47:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:47:13.597527Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:47:13.597624Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T12:47:13.599649Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6c80de388e5020e8","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T12:47:13.599983Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600065Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600120Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600237Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600312Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600356Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600390Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600401Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600419Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600453Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600558Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600620Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600661Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600697Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.603309Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"warn","ts":"2024-08-16T12:47:13.603431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.901135296s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-16T12:47:13.603462Z","caller":"traceutil/trace.go:171","msg":"trace[1531816342] range","detail":"{range_begin:; range_end:; }","duration":"8.901182222s","start":"2024-08-16T12:47:04.702269Z","end":"2024-08-16T12:47:13.603451Z","steps":["trace[1531816342] 'agreement among raft nodes before linearized reading'  (duration: 8.901133699s)"],"step_count":1}
	{"level":"error","ts":"2024-08-16T12:47:13.603536Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-16T12:47:13.604481Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2024-08-16T12:47:13.604523Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-863936","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"]}
	
	
	==> kernel <==
	 12:51:06 up 14 min,  0 users,  load average: 0.45, 0.43, 0.28
	Linux ha-863936 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d] <==
	I0816 12:50:35.126619       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:50:45.124614       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:50:45.124675       1 main.go:299] handling current node
	I0816 12:50:45.124693       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:50:45.124699       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:50:45.124833       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:50:45.124854       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:50:45.124906       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:50:45.124911       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:50:55.124914       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:50:55.125185       1 main.go:299] handling current node
	I0816 12:50:55.125341       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:50:55.125451       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:50:55.125690       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:50:55.125789       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:50:55.126170       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:50:55.126237       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:51:05.130306       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:51:05.130500       1 main.go:299] handling current node
	I0816 12:51:05.130545       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:51:05.130553       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:51:05.130926       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:51:05.131027       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:51:05.131188       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:51:05.131217       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331] <==
	I0816 12:46:36.069611       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:46:46.068916       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:46:46.068999       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:46:46.069167       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:46:46.069176       1 main.go:299] handling current node
	I0816 12:46:46.069199       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:46:46.069204       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:46:46.069257       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:46:46.069262       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:46:56.071559       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:46:56.071600       1 main.go:299] handling current node
	I0816 12:46:56.071625       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:46:56.071630       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:46:56.071821       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:46:56.071858       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:46:56.071928       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:46:56.072003       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:47:06.077543       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:47:06.077910       1 main.go:299] handling current node
	I0816 12:47:06.078036       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:47:06.078133       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:47:06.079102       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:47:06.079145       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:47:06.079235       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:47:06.079254       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa] <==
	I0816 12:48:54.352674       1 options.go:228] external host was not specified, using 192.168.39.2
	I0816 12:48:54.371475       1 server.go:142] Version: v1.31.0
	I0816 12:48:54.371539       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:48:55.002378       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 12:48:55.028456       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 12:48:55.032470       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 12:48:55.037006       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 12:48:55.037343       1 instance.go:232] Using reconciler: lease
	W0816 12:49:15.001509       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0816 12:49:15.002173       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0816 12:49:15.038767       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5] <==
	I0816 12:49:31.063929       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0816 12:49:31.147748       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 12:49:31.147886       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 12:49:31.147923       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 12:49:31.148044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 12:49:31.148395       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 12:49:31.148822       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 12:49:31.151024       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 12:49:31.157268       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0816 12:49:31.162930       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116]
	I0816 12:49:31.164133       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 12:49:31.164196       1 aggregator.go:171] initial CRD sync complete...
	I0816 12:49:31.164232       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 12:49:31.164255       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 12:49:31.164284       1 cache.go:39] Caches are synced for autoregister controller
	I0816 12:49:31.174799       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 12:49:31.181146       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 12:49:31.181191       1 policy_source.go:224] refreshing policies
	I0816 12:49:31.198459       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 12:49:31.264390       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 12:49:31.285246       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0816 12:49:31.290529       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0816 12:49:32.059679       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 12:49:32.525123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.2]
	W0816 12:49:52.507698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.2]
	
	
	==> kube-controller-manager [716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae] <==
	I0816 12:48:55.071083       1 serving.go:386] Generated self-signed cert in-memory
	I0816 12:48:55.303350       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 12:48:55.303384       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:48:55.304886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 12:48:55.305077       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:48:55.305279       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 12:48:55.305460       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 12:49:16.046352       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.2:8443/healthz\": dial tcp 192.168.39.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c] <==
	I0816 12:49:41.214545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.395µs"
	I0816 12:49:41.299250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:49:56.626150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="22.671854ms"
	I0816 12:49:56.626350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.068µs"
	I0816 12:49:56.670038       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="23.187209ms"
	I0816 12:49:56.670581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="216.792µs"
	I0816 12:49:56.672597       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fc2lc\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 12:49:56.673339       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a322e0ec-14ff-458e-bae7-924b2e2d8142", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fc2lc": the object has been modified; please apply your changes to the latest version and try again
	I0816 12:50:11.541889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m02"
	I0816 12:50:11.942886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m03"
	I0816 12:50:12.961233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.584192ms"
	I0816 12:50:12.961335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.685µs"
	I0816 12:50:16.525626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:16.552762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:18.121313       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:21.637207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:28.842373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:28.929052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:31.622147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.720262ms"
	I0816 12:50:31.622627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.427µs"
	I0816 12:50:42.701417       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m03"
	I0816 12:50:57.928675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863936-m04"
	I0816 12:50:57.929462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:57.941171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:50:58.070602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	
	
	==> kube-proxy [15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:48:55.906426       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:48:58.978676       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:49:02.051481       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:49:08.195335       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:49:17.411538       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0816 12:49:36.556503       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	E0816 12:49:36.556679       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:49:36.632240       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:49:36.632390       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:49:36.632489       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:49:36.635241       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:49:36.635675       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:49:36.635830       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:49:36.637544       1 config.go:197] "Starting service config controller"
	I0816 12:49:36.637852       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:49:36.638020       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:49:36.638124       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:49:36.638808       1 config.go:326] "Starting node config controller"
	I0816 12:49:36.638879       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:49:36.738298       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:49:36.738519       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:49:36.739066       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4] <==
	E0816 12:46:10.021919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:13.090527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:13.090655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:13.090794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:13.090832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:13.090916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:13.091011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:19.235825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:19.235988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:19.236146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:19.236189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:22.306487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:22.306894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:31.524574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:31.524645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:31.524777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:31.524844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:34.594839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:34.594902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:53.027463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:53.027588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:53.027817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:53.027932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:59.187815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:59.188321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d] <==
	E0816 12:41:15.420071       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c6wlb\": pod kindnet-c6wlb is already assigned to node \"ha-863936-m04\"" pod="kube-system/kindnet-c6wlb"
	I0816 12:41:15.420190       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c6wlb" node="ha-863936-m04"
	E0816 12:41:15.413578       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	E0816 12:41:15.424458       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71a9943c-8ebe-4a91-876f-8e47aca3f719(kube-system/kube-proxy-lsjgf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lsjgf"
	E0816 12:41:15.425608       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" pod="kube-system/kube-proxy-lsjgf"
	I0816 12:41:15.425683       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	E0816 12:47:00.767753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0816 12:47:02.002268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0816 12:47:02.017816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0816 12:47:02.124860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0816 12:47:02.148673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0816 12:47:02.159416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0816 12:47:03.299738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0816 12:47:04.865817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0816 12:47:05.099148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0816 12:47:08.348477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0816 12:47:09.044545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0816 12:47:10.729059       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0816 12:47:11.496273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0816 12:47:12.303791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0816 12:47:12.375580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	I0816 12:47:13.519532       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0816 12:47:13.519751       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0816 12:47:13.520061       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0816 12:47:13.523832       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd] <==
	W0816 12:49:23.641416       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:23.641546       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:23.970024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:23.970090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:24.061304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:24.061370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:24.523684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:24.523756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:24.805592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:24.805709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.124651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.124736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.296302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.296427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.304246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.304367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.495523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.495635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.607372       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.607440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:31.080890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 12:49:31.081092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:49:31.081350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:49:31.081445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0816 12:49:33.861805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 12:49:56 ha-863936 kubelet[1336]: E0816 12:49:56.215534    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812596215043258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:49:56 ha-863936 kubelet[1336]: E0816 12:49:56.215559    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812596215043258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:49:58 ha-863936 kubelet[1336]: I0816 12:49:58.292076    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-zqpfx" podStartSLOduration=557.428315657 podStartE2EDuration="9m20.292047638s" podCreationTimestamp="2024-08-16 12:40:38 +0000 UTC" firstStartedPulling="2024-08-16 12:40:39.283719973 +0000 UTC m=+203.552488125" lastFinishedPulling="2024-08-16 12:40:42.147451961 +0000 UTC m=+206.416220106" observedRunningTime="2024-08-16 12:40:42.781858424 +0000 UTC m=+207.050626590" watchObservedRunningTime="2024-08-16 12:49:58.292047638 +0000 UTC m=+762.560815797"
	Aug 16 12:50:06 ha-863936 kubelet[1336]: E0816 12:50:06.219312    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812606218752313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:06 ha-863936 kubelet[1336]: E0816 12:50:06.219353    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812606218752313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:15 ha-863936 kubelet[1336]: E0816 12:50:15.925932    1336 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:50:15 ha-863936 kubelet[1336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:50:15 ha-863936 kubelet[1336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:50:15 ha-863936 kubelet[1336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:50:15 ha-863936 kubelet[1336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:50:16 ha-863936 kubelet[1336]: E0816 12:50:16.221322    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812616220847320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:16 ha-863936 kubelet[1336]: E0816 12:50:16.221376    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812616220847320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:26 ha-863936 kubelet[1336]: E0816 12:50:26.227238    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812626225522137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:26 ha-863936 kubelet[1336]: E0816 12:50:26.227653    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812626225522137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:36 ha-863936 kubelet[1336]: E0816 12:50:36.231242    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812636230661556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:36 ha-863936 kubelet[1336]: E0816 12:50:36.231401    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812636230661556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:40 ha-863936 kubelet[1336]: I0816 12:50:40.897478    1336 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-863936" podUID="55dba92f-60c5-416c-9165-cbde743fbfe2"
	Aug 16 12:50:40 ha-863936 kubelet[1336]: I0816 12:50:40.926359    1336 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-863936"
	Aug 16 12:50:45 ha-863936 kubelet[1336]: I0816 12:50:45.915554    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-863936" podStartSLOduration=5.915521959 podStartE2EDuration="5.915521959s" podCreationTimestamp="2024-08-16 12:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-16 12:50:45.915116998 +0000 UTC m=+810.183885144" watchObservedRunningTime="2024-08-16 12:50:45.915521959 +0000 UTC m=+810.184290125"
	Aug 16 12:50:46 ha-863936 kubelet[1336]: E0816 12:50:46.233860    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812646233474152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:46 ha-863936 kubelet[1336]: E0816 12:50:46.233918    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812646233474152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:56 ha-863936 kubelet[1336]: E0816 12:50:56.236600    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812656236042080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:50:56 ha-863936 kubelet[1336]: E0816 12:50:56.236888    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812656236042080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:51:06 ha-863936 kubelet[1336]: E0816 12:51:06.238853    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812666238366955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:51:06 ha-863936 kubelet[1336]: E0816 12:51:06.238929    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812666238366955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 12:51:04.636399   29777 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-3966/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863936 -n ha-863936
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (356.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 stop -v=7 --alsologtostderr: exit status 82 (2m0.458561738s)

                                                
                                                
-- stdout --
	* Stopping node "ha-863936-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:51:24.027761   30167 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:51:24.028308   30167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:51:24.028326   30167 out.go:358] Setting ErrFile to fd 2...
	I0816 12:51:24.028334   30167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:51:24.028742   30167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:51:24.029306   30167 out.go:352] Setting JSON to false
	I0816 12:51:24.029448   30167 mustload.go:65] Loading cluster: ha-863936
	I0816 12:51:24.029918   30167 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:51:24.030020   30167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:51:24.030334   30167 mustload.go:65] Loading cluster: ha-863936
	I0816 12:51:24.030516   30167 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:51:24.030547   30167 stop.go:39] StopHost: ha-863936-m04
	I0816 12:51:24.031039   30167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:51:24.031076   30167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:51:24.046286   30167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37065
	I0816 12:51:24.046683   30167 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:51:24.047118   30167 main.go:141] libmachine: Using API Version  1
	I0816 12:51:24.047139   30167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:51:24.047503   30167 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:51:24.049721   30167 out.go:177] * Stopping node "ha-863936-m04"  ...
	I0816 12:51:24.050888   30167 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 12:51:24.050923   30167 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:51:24.051123   30167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 12:51:24.051142   30167 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:51:24.053830   30167 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:51:24.054329   30167 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:50:52 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:51:24.054353   30167 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:51:24.054508   30167 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:51:24.054650   30167 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:51:24.054795   30167 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:51:24.054951   30167 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	I0816 12:51:24.136277   30167 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 12:51:24.191262   30167 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 12:51:24.244074   30167 main.go:141] libmachine: Stopping "ha-863936-m04"...
	I0816 12:51:24.244103   30167 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:51:24.245759   30167 main.go:141] libmachine: (ha-863936-m04) Calling .Stop
	I0816 12:51:24.249003   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 0/120
	I0816 12:51:25.250626   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 1/120
	I0816 12:51:26.252277   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 2/120
	I0816 12:51:27.253728   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 3/120
	I0816 12:51:28.255497   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 4/120
	I0816 12:51:29.257388   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 5/120
	I0816 12:51:30.259065   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 6/120
	I0816 12:51:31.260527   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 7/120
	I0816 12:51:32.262309   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 8/120
	I0816 12:51:33.263646   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 9/120
	I0816 12:51:34.266006   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 10/120
	I0816 12:51:35.267194   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 11/120
	I0816 12:51:36.268794   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 12/120
	I0816 12:51:37.270178   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 13/120
	I0816 12:51:38.272376   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 14/120
	I0816 12:51:39.274083   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 15/120
	I0816 12:51:40.275414   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 16/120
	I0816 12:51:41.276991   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 17/120
	I0816 12:51:42.278297   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 18/120
	I0816 12:51:43.279736   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 19/120
	I0816 12:51:44.281824   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 20/120
	I0816 12:51:45.283517   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 21/120
	I0816 12:51:46.284771   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 22/120
	I0816 12:51:47.286110   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 23/120
	I0816 12:51:48.287812   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 24/120
	I0816 12:51:49.289759   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 25/120
	I0816 12:51:50.291287   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 26/120
	I0816 12:51:51.292443   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 27/120
	I0816 12:51:52.293815   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 28/120
	I0816 12:51:53.295149   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 29/120
	I0816 12:51:54.296415   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 30/120
	I0816 12:51:55.297697   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 31/120
	I0816 12:51:56.299354   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 32/120
	I0816 12:51:57.300515   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 33/120
	I0816 12:51:58.302133   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 34/120
	I0816 12:51:59.303675   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 35/120
	I0816 12:52:00.305034   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 36/120
	I0816 12:52:01.306132   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 37/120
	I0816 12:52:02.307521   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 38/120
	I0816 12:52:03.309021   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 39/120
	I0816 12:52:04.311006   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 40/120
	I0816 12:52:05.312132   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 41/120
	I0816 12:52:06.314355   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 42/120
	I0816 12:52:07.315767   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 43/120
	I0816 12:52:08.317118   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 44/120
	I0816 12:52:09.318966   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 45/120
	I0816 12:52:10.320266   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 46/120
	I0816 12:52:11.321565   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 47/120
	I0816 12:52:12.322904   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 48/120
	I0816 12:52:13.324180   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 49/120
	I0816 12:52:14.326100   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 50/120
	I0816 12:52:15.327547   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 51/120
	I0816 12:52:16.328854   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 52/120
	I0816 12:52:17.330450   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 53/120
	I0816 12:52:18.331694   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 54/120
	I0816 12:52:19.333535   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 55/120
	I0816 12:52:20.335403   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 56/120
	I0816 12:52:21.337070   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 57/120
	I0816 12:52:22.338315   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 58/120
	I0816 12:52:23.339574   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 59/120
	I0816 12:52:24.341693   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 60/120
	I0816 12:52:25.343447   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 61/120
	I0816 12:52:26.345060   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 62/120
	I0816 12:52:27.347340   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 63/120
	I0816 12:52:28.349397   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 64/120
	I0816 12:52:29.351186   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 65/120
	I0816 12:52:30.352623   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 66/120
	I0816 12:52:31.353944   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 67/120
	I0816 12:52:32.355327   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 68/120
	I0816 12:52:33.356852   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 69/120
	I0816 12:52:34.358900   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 70/120
	I0816 12:52:35.361016   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 71/120
	I0816 12:52:36.362129   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 72/120
	I0816 12:52:37.363448   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 73/120
	I0816 12:52:38.364828   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 74/120
	I0816 12:52:39.366767   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 75/120
	I0816 12:52:40.368239   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 76/120
	I0816 12:52:41.369579   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 77/120
	I0816 12:52:42.371466   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 78/120
	I0816 12:52:43.372881   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 79/120
	I0816 12:52:44.375026   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 80/120
	I0816 12:52:45.376789   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 81/120
	I0816 12:52:46.378863   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 82/120
	I0816 12:52:47.380757   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 83/120
	I0816 12:52:48.382541   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 84/120
	I0816 12:52:49.384608   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 85/120
	I0816 12:52:50.386178   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 86/120
	I0816 12:52:51.387677   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 87/120
	I0816 12:52:52.389178   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 88/120
	I0816 12:52:53.391479   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 89/120
	I0816 12:52:54.393637   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 90/120
	I0816 12:52:55.394921   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 91/120
	I0816 12:52:56.396545   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 92/120
	I0816 12:52:57.398112   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 93/120
	I0816 12:52:58.399258   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 94/120
	I0816 12:52:59.400442   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 95/120
	I0816 12:53:00.401877   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 96/120
	I0816 12:53:01.403236   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 97/120
	I0816 12:53:02.404710   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 98/120
	I0816 12:53:03.405891   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 99/120
	I0816 12:53:04.407860   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 100/120
	I0816 12:53:05.409161   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 101/120
	I0816 12:53:06.411181   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 102/120
	I0816 12:53:07.412355   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 103/120
	I0816 12:53:08.414420   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 104/120
	I0816 12:53:09.416253   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 105/120
	I0816 12:53:10.417703   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 106/120
	I0816 12:53:11.418829   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 107/120
	I0816 12:53:12.420541   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 108/120
	I0816 12:53:13.421790   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 109/120
	I0816 12:53:14.423835   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 110/120
	I0816 12:53:15.425727   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 111/120
	I0816 12:53:16.427460   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 112/120
	I0816 12:53:17.428803   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 113/120
	I0816 12:53:18.430145   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 114/120
	I0816 12:53:19.432102   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 115/120
	I0816 12:53:20.433484   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 116/120
	I0816 12:53:21.435276   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 117/120
	I0816 12:53:22.437466   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 118/120
	I0816 12:53:23.438710   30167 main.go:141] libmachine: (ha-863936-m04) Waiting for machine to stop 119/120
	I0816 12:53:24.439562   30167 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 12:53:24.439627   30167 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 12:53:24.441662   30167 out.go:201] 
	W0816 12:53:24.442931   30167 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 12:53:24.442960   30167 out.go:270] * 
	* 
	W0816 12:53:24.445219   30167 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 12:53:24.446468   30167 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-863936 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr: exit status 3 (18.90147588s)

                                                
                                                
-- stdout --
	ha-863936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863936-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:53:24.489114   30619 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:53:24.489249   30619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:53:24.489259   30619 out.go:358] Setting ErrFile to fd 2...
	I0816 12:53:24.489263   30619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:53:24.489435   30619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:53:24.489626   30619 out.go:352] Setting JSON to false
	I0816 12:53:24.489653   30619 mustload.go:65] Loading cluster: ha-863936
	I0816 12:53:24.489750   30619 notify.go:220] Checking for updates...
	I0816 12:53:24.490070   30619 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:53:24.490085   30619 status.go:255] checking status of ha-863936 ...
	I0816 12:53:24.490484   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.490542   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.509610   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I0816 12:53:24.510091   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.510638   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.510657   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.511066   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.511281   30619 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:53:24.512701   30619 status.go:330] ha-863936 host status = "Running" (err=<nil>)
	I0816 12:53:24.512714   30619 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:53:24.513013   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.513057   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.527737   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0816 12:53:24.528134   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.528550   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.528572   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.528875   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.529061   30619 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:53:24.531627   30619 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:53:24.532096   30619 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:53:24.532120   30619 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:53:24.532280   30619 host.go:66] Checking if "ha-863936" exists ...
	I0816 12:53:24.532657   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.532699   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.547422   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0816 12:53:24.547788   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.548265   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.548291   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.548576   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.548746   30619 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:53:24.549003   30619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:53:24.549042   30619 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:53:24.551516   30619 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:53:24.551947   30619 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:53:24.551976   30619 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:53:24.552186   30619 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:53:24.552338   30619 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:53:24.552489   30619 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:53:24.552625   30619 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:53:24.634514   30619 ssh_runner.go:195] Run: systemctl --version
	I0816 12:53:24.641654   30619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:53:24.658984   30619 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:53:24.659019   30619 api_server.go:166] Checking apiserver status ...
	I0816 12:53:24.659059   30619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:53:24.681887   30619 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4781/cgroup
	W0816 12:53:24.691238   30619 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4781/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:53:24.691280   30619 ssh_runner.go:195] Run: ls
	I0816 12:53:24.696015   30619 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:53:24.700601   30619 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:53:24.700621   30619 status.go:422] ha-863936 apiserver status = Running (err=<nil>)
	I0816 12:53:24.700630   30619 status.go:257] ha-863936 status: &{Name:ha-863936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:53:24.700653   30619 status.go:255] checking status of ha-863936-m02 ...
	I0816 12:53:24.701042   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.701141   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.715852   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0816 12:53:24.716221   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.716675   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.716697   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.717028   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.717269   30619 main.go:141] libmachine: (ha-863936-m02) Calling .GetState
	I0816 12:53:24.718867   30619 status.go:330] ha-863936-m02 host status = "Running" (err=<nil>)
	I0816 12:53:24.718888   30619 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:53:24.719181   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.719211   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.733457   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
	I0816 12:53:24.733820   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.734286   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.734308   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.734635   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.734824   30619 main.go:141] libmachine: (ha-863936-m02) Calling .GetIP
	I0816 12:53:24.737493   30619 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:53:24.737938   30619 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:48:58 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:53:24.737980   30619 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:53:24.738152   30619 host.go:66] Checking if "ha-863936-m02" exists ...
	I0816 12:53:24.738489   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.738527   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.752750   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0816 12:53:24.753221   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.753715   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.753733   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.754031   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.754251   30619 main.go:141] libmachine: (ha-863936-m02) Calling .DriverName
	I0816 12:53:24.754431   30619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:53:24.754452   30619 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHHostname
	I0816 12:53:24.757161   30619 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:53:24.757594   30619 main.go:141] libmachine: (ha-863936-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:1e:73", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:48:58 +0000 UTC Type:0 Mac:52:54:00:c0:1e:73 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-863936-m02 Clientid:01:52:54:00:c0:1e:73}
	I0816 12:53:24.757618   30619 main.go:141] libmachine: (ha-863936-m02) DBG | domain ha-863936-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:c0:1e:73 in network mk-ha-863936
	I0816 12:53:24.757755   30619 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHPort
	I0816 12:53:24.757924   30619 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHKeyPath
	I0816 12:53:24.758077   30619 main.go:141] libmachine: (ha-863936-m02) Calling .GetSSHUsername
	I0816 12:53:24.758211   30619 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m02/id_rsa Username:docker}
	I0816 12:53:24.841764   30619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:53:24.860890   30619 kubeconfig.go:125] found "ha-863936" server: "https://192.168.39.254:8443"
	I0816 12:53:24.860931   30619 api_server.go:166] Checking apiserver status ...
	I0816 12:53:24.860968   30619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:53:24.875581   30619 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	W0816 12:53:24.885943   30619 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 12:53:24.885988   30619 ssh_runner.go:195] Run: ls
	I0816 12:53:24.891759   30619 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 12:53:24.895912   30619 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 12:53:24.895932   30619 status.go:422] ha-863936-m02 apiserver status = Running (err=<nil>)
	I0816 12:53:24.895939   30619 status.go:257] ha-863936-m02 status: &{Name:ha-863936-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:53:24.895953   30619 status.go:255] checking status of ha-863936-m04 ...
	I0816 12:53:24.896235   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.896270   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.911739   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I0816 12:53:24.912203   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.912636   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.912658   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.912904   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.913081   30619 main.go:141] libmachine: (ha-863936-m04) Calling .GetState
	I0816 12:53:24.914779   30619 status.go:330] ha-863936-m04 host status = "Running" (err=<nil>)
	I0816 12:53:24.914794   30619 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:53:24.915064   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.915098   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.929794   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0816 12:53:24.930260   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.930706   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.930727   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.931045   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.931236   30619 main.go:141] libmachine: (ha-863936-m04) Calling .GetIP
	I0816 12:53:24.933769   30619 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:53:24.934195   30619 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:50:52 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:53:24.934215   30619 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:53:24.934360   30619 host.go:66] Checking if "ha-863936-m04" exists ...
	I0816 12:53:24.934642   30619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:53:24.934677   30619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:53:24.949881   30619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0816 12:53:24.950333   30619 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:53:24.950829   30619 main.go:141] libmachine: Using API Version  1
	I0816 12:53:24.950849   30619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:53:24.951128   30619 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:53:24.951273   30619 main.go:141] libmachine: (ha-863936-m04) Calling .DriverName
	I0816 12:53:24.951441   30619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:53:24.951462   30619 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHHostname
	I0816 12:53:24.954453   30619 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:53:24.954883   30619 main.go:141] libmachine: (ha-863936-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:98:98", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:50:52 +0000 UTC Type:0 Mac:52:54:00:32:98:98 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-863936-m04 Clientid:01:52:54:00:32:98:98}
	I0816 12:53:24.954907   30619 main.go:141] libmachine: (ha-863936-m04) DBG | domain ha-863936-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:32:98:98 in network mk-ha-863936
	I0816 12:53:24.955045   30619 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHPort
	I0816 12:53:24.955215   30619 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHKeyPath
	I0816 12:53:24.955309   30619 main.go:141] libmachine: (ha-863936-m04) Calling .GetSSHUsername
	I0816 12:53:24.955433   30619 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936-m04/id_rsa Username:docker}
	W0816 12:53:43.349100   30619 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.74:22: connect: no route to host
	W0816 12:53:43.349206   30619 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.74:22: connect: no route to host
	E0816 12:53:43.349224   30619 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.74:22: connect: no route to host
	I0816 12:53:43.349235   30619 status.go:257] ha-863936-m04 status: &{Name:ha-863936-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0816 12:53:43.349259   30619 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.74:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863936 -n ha-863936
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863936 logs -n 25: (1.69032974s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m04 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp testdata/cp-test.txt                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936:/home/docker/cp-test_ha-863936-m04_ha-863936.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936 sudo cat                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m02:/home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m02 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m03:/home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n                                                                 | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | ha-863936-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863936 ssh -n ha-863936-m03 sudo cat                                          | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC | 16 Aug 24 12:41 UTC |
	|         | /home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863936 node stop m02 -v=7                                                     | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-863936 node start m02 -v=7                                                    | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863936 -v=7                                                           | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-863936 -v=7                                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-863936 --wait=true -v=7                                                    | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:47 UTC | 16 Aug 24 12:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863936                                                                | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:51 UTC |                     |
	| node    | ha-863936 node delete m03 -v=7                                                   | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:51 UTC | 16 Aug 24 12:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-863936 stop -v=7                                                              | ha-863936 | jenkins | v1.33.1 | 16 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:47:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:47:12.666417   28466 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:47:12.666669   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:47:12.666678   28466 out.go:358] Setting ErrFile to fd 2...
	I0816 12:47:12.666682   28466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:47:12.666831   28466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:47:12.667392   28466 out.go:352] Setting JSON to false
	I0816 12:47:12.668288   28466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1778,"bootTime":1723810655,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:47:12.668342   28466 start.go:139] virtualization: kvm guest
	I0816 12:47:12.671664   28466 out.go:177] * [ha-863936] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:47:12.673302   28466 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:47:12.673304   28466 notify.go:220] Checking for updates...
	I0816 12:47:12.674803   28466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:47:12.676443   28466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:47:12.677987   28466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:47:12.679436   28466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:47:12.680839   28466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:47:12.682494   28466 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:47:12.682607   28466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:47:12.683178   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:47:12.683258   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:47:12.698014   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0816 12:47:12.698606   28466 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:47:12.699102   28466 main.go:141] libmachine: Using API Version  1
	I0816 12:47:12.699138   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:47:12.699461   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:47:12.699645   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:47:12.733350   28466 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 12:47:12.734770   28466 start.go:297] selected driver: kvm2
	I0816 12:47:12.734792   28466 start.go:901] validating driver "kvm2" against &{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:47:12.734989   28466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:47:12.735447   28466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:47:12.735569   28466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:47:12.749799   28466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:47:12.750437   28466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:47:12.750498   28466 cni.go:84] Creating CNI manager for ""
	I0816 12:47:12.750509   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 12:47:12.750567   28466 start.go:340] cluster config:
	{Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:47:12.750688   28466 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:47:12.752461   28466 out.go:177] * Starting "ha-863936" primary control-plane node in "ha-863936" cluster
	I0816 12:47:12.753668   28466 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:47:12.753701   28466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:47:12.753712   28466 cache.go:56] Caching tarball of preloaded images
	I0816 12:47:12.753784   28466 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 12:47:12.753794   28466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:47:12.753899   28466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/config.json ...
	I0816 12:47:12.754122   28466 start.go:360] acquireMachinesLock for ha-863936: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 12:47:12.754173   28466 start.go:364] duration metric: took 29.398µs to acquireMachinesLock for "ha-863936"
	I0816 12:47:12.754189   28466 start.go:96] Skipping create...Using existing machine configuration
	I0816 12:47:12.754198   28466 fix.go:54] fixHost starting: 
	I0816 12:47:12.754472   28466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:47:12.754500   28466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:47:12.768329   28466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I0816 12:47:12.768742   28466 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:47:12.769264   28466 main.go:141] libmachine: Using API Version  1
	I0816 12:47:12.769291   28466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:47:12.769600   28466 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:47:12.769759   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:47:12.769888   28466 main.go:141] libmachine: (ha-863936) Calling .GetState
	I0816 12:47:12.771391   28466 fix.go:112] recreateIfNeeded on ha-863936: state=Running err=<nil>
	W0816 12:47:12.771412   28466 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 12:47:12.773101   28466 out.go:177] * Updating the running kvm2 "ha-863936" VM ...
	I0816 12:47:12.774159   28466 machine.go:93] provisionDockerMachine start ...
	I0816 12:47:12.774177   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:47:12.774346   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:12.776633   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.777058   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:12.777084   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.777203   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:12.777371   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.777532   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.777672   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:12.777830   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:12.778014   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:12.778025   28466 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 12:47:12.882202   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936
	
	I0816 12:47:12.882231   28466 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:47:12.882501   28466 buildroot.go:166] provisioning hostname "ha-863936"
	I0816 12:47:12.882520   28466 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:47:12.882671   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:12.885538   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.885951   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:12.885974   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:12.886198   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:12.886371   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.886548   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:12.886694   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:12.886864   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:12.887089   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:12.887107   28466 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863936 && echo "ha-863936" | sudo tee /etc/hostname
	I0816 12:47:13.002689   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863936
	
	I0816 12:47:13.002712   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.005811   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.006178   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.006199   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.006410   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:13.006584   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.006778   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.006940   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:13.007102   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:13.007277   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:13.007296   28466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863936/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:47:13.109896   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:47:13.109933   28466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 12:47:13.109984   28466 buildroot.go:174] setting up certificates
	I0816 12:47:13.110017   28466 provision.go:84] configureAuth start
	I0816 12:47:13.110034   28466 main.go:141] libmachine: (ha-863936) Calling .GetMachineName
	I0816 12:47:13.110314   28466 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:47:13.112710   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.113112   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.113139   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.113301   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.115312   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.115695   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.115722   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.115946   28466 provision.go:143] copyHostCerts
	I0816 12:47:13.115979   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:47:13.116009   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 12:47:13.116024   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 12:47:13.116091   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 12:47:13.116167   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:47:13.116185   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 12:47:13.116191   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 12:47:13.116214   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 12:47:13.116253   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:47:13.116268   28466 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 12:47:13.116277   28466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 12:47:13.116298   28466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 12:47:13.116340   28466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.ha-863936 san=[127.0.0.1 192.168.39.2 ha-863936 localhost minikube]
	I0816 12:47:13.241271   28466 provision.go:177] copyRemoteCerts
	I0816 12:47:13.241327   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:47:13.241348   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.244236   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.244675   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.244694   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.244879   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:13.245069   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.245226   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:13.245337   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:47:13.324175   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 12:47:13.324258   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 12:47:13.352785   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 12:47:13.352858   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 12:47:13.383693   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 12:47:13.383777   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 12:47:13.411313   28466 provision.go:87] duration metric: took 301.279937ms to configureAuth
	I0816 12:47:13.411341   28466 buildroot.go:189] setting minikube options for container-runtime
	I0816 12:47:13.411601   28466 config.go:182] Loaded profile config "ha-863936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:47:13.411681   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:47:13.414348   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.414704   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:47:13.414745   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:47:13.414898   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:47:13.415076   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.415225   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:47:13.415392   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:47:13.415565   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:47:13.415770   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:47:13.415797   28466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:48:44.394310   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:48:44.394335   28466 machine.go:96] duration metric: took 1m31.620163698s to provisionDockerMachine
	I0816 12:48:44.394354   28466 start.go:293] postStartSetup for "ha-863936" (driver="kvm2")
	I0816 12:48:44.394366   28466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:48:44.394385   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.394688   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:48:44.394719   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.397993   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.398427   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.398456   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.398607   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.398788   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.398967   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.399085   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:48:44.480013   28466 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:48:44.484297   28466 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 12:48:44.484330   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 12:48:44.484399   28466 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 12:48:44.484482   28466 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 12:48:44.484493   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 12:48:44.484580   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 12:48:44.493723   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:48:44.518064   28466 start.go:296] duration metric: took 123.699008ms for postStartSetup
	I0816 12:48:44.518101   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.518362   28466 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 12:48:44.518401   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.521196   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.521654   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.521677   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.521820   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.521988   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.522154   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.522298   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	W0816 12:48:44.599191   28466 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0816 12:48:44.599218   28466 fix.go:56] duration metric: took 1m31.845022194s for fixHost
	I0816 12:48:44.599242   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.601955   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.602471   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.602500   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.602682   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.602877   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.603063   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.603220   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.603384   28466 main.go:141] libmachine: Using SSH client type: native
	I0816 12:48:44.603551   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0816 12:48:44.603560   28466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 12:48:44.702014   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812524.668188641
	
	I0816 12:48:44.702039   28466 fix.go:216] guest clock: 1723812524.668188641
	I0816 12:48:44.702048   28466 fix.go:229] Guest: 2024-08-16 12:48:44.668188641 +0000 UTC Remote: 2024-08-16 12:48:44.599226034 +0000 UTC m=+91.966804300 (delta=68.962607ms)
	I0816 12:48:44.702093   28466 fix.go:200] guest clock delta is within tolerance: 68.962607ms
	I0816 12:48:44.702104   28466 start.go:83] releasing machines lock for "ha-863936", held for 1m31.947919353s
	I0816 12:48:44.702142   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.702373   28466 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:48:44.704995   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.705336   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.705359   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.705552   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.706017   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.706213   28466 main.go:141] libmachine: (ha-863936) Calling .DriverName
	I0816 12:48:44.706316   28466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:48:44.706359   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.706454   28466 ssh_runner.go:195] Run: cat /version.json
	I0816 12:48:44.706480   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHHostname
	I0816 12:48:44.708936   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709257   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.709284   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709302   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709361   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.709542   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.709685   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.709752   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:44.709775   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:44.709816   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:48:44.709944   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHPort
	I0816 12:48:44.710085   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHKeyPath
	I0816 12:48:44.710227   28466 main.go:141] libmachine: (ha-863936) Calling .GetSSHUsername
	I0816 12:48:44.710366   28466 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/ha-863936/id_rsa Username:docker}
	I0816 12:48:44.782444   28466 ssh_runner.go:195] Run: systemctl --version
	I0816 12:48:44.805194   28466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:48:44.967393   28466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 12:48:44.973191   28466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 12:48:44.973253   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:48:44.982418   28466 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 12:48:44.982437   28466 start.go:495] detecting cgroup driver to use...
	I0816 12:48:44.982490   28466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:48:44.998364   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:48:45.012279   28466 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:48:45.012338   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:48:45.025798   28466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:48:45.038835   28466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:48:45.180125   28466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:48:45.328402   28466 docker.go:233] disabling docker service ...
	I0816 12:48:45.328478   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:48:45.345286   28466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:48:45.359026   28466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:48:45.509178   28466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:48:45.652776   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:48:45.666563   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:48:45.686132   28466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:48:45.686195   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.696381   28466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:48:45.696445   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.706372   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.716646   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.726888   28466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:48:45.737421   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.747282   28466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.758357   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:48:45.768222   28466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:48:45.777321   28466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:48:45.786097   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:48:45.935318   28466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:48:46.227265   28466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:48:46.227347   28466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:48:46.236106   28466 start.go:563] Will wait 60s for crictl version
	I0816 12:48:46.236176   28466 ssh_runner.go:195] Run: which crictl
	I0816 12:48:46.239945   28466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:48:46.275481   28466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 12:48:46.275568   28466 ssh_runner.go:195] Run: crio --version
	I0816 12:48:46.305240   28466 ssh_runner.go:195] Run: crio --version
	I0816 12:48:46.336817   28466 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 12:48:46.338167   28466 main.go:141] libmachine: (ha-863936) Calling .GetIP
	I0816 12:48:46.340854   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:46.341256   28466 main.go:141] libmachine: (ha-863936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:fe:d4", ip: ""} in network mk-ha-863936: {Iface:virbr1 ExpiryTime:2024-08-16 13:36:47 +0000 UTC Type:0 Mac:52:54:00:88:fe:d4 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-863936 Clientid:01:52:54:00:88:fe:d4}
	I0816 12:48:46.341282   28466 main.go:141] libmachine: (ha-863936) DBG | domain ha-863936 has defined IP address 192.168.39.2 and MAC address 52:54:00:88:fe:d4 in network mk-ha-863936
	I0816 12:48:46.341443   28466 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 12:48:46.346335   28466 kubeadm.go:883] updating cluster {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:48:46.346468   28466 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:48:46.346515   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:48:46.389339   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:48:46.389363   28466 crio.go:433] Images already preloaded, skipping extraction
	I0816 12:48:46.389436   28466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:48:46.438791   28466 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:48:46.438813   28466 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:48:46.438822   28466 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.31.0 crio true true} ...
	I0816 12:48:46.438936   28466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:48:46.439000   28466 ssh_runner.go:195] Run: crio config
	I0816 12:48:46.554876   28466 cni.go:84] Creating CNI manager for ""
	I0816 12:48:46.554897   28466 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 12:48:46.554908   28466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:48:46.554935   28466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863936 NodeName:ha-863936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:48:46.555102   28466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:48:46.555156   28466 kube-vip.go:115] generating kube-vip config ...
	I0816 12:48:46.555206   28466 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 12:48:46.571361   28466 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 12:48:46.571462   28466 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 12:48:46.571516   28466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:48:46.581995   28466 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:48:46.582052   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 12:48:46.592152   28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 12:48:46.612010   28466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:48:46.641550   28466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0816 12:48:46.659956   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 12:48:46.683534   28466 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 12:48:46.692332   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:48:46.860322   28466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:48:46.877204   28466 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936 for IP: 192.168.39.2
	I0816 12:48:46.877223   28466 certs.go:194] generating shared ca certs ...
	I0816 12:48:46.877235   28466 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:48:46.877378   28466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 12:48:46.877421   28466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 12:48:46.877431   28466 certs.go:256] generating profile certs ...
	I0816 12:48:46.877501   28466 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/client.key
	I0816 12:48:46.877529   28466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07
	I0816 12:48:46.877550   28466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.101 192.168.39.116 192.168.39.254]
	I0816 12:48:46.987353   28466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07 ...
	I0816 12:48:46.987382   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07: {Name:mk10d54a2525ec300df31026c8b6dc6102e2744f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:48:46.987569   28466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07 ...
	I0816 12:48:46.987582   28466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07: {Name:mk2f0b27b4a347a7366b445074cb7ce586272135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:48:46.987660   28466 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt.72359c07 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt
	I0816 12:48:46.987812   28466 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key.72359c07 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key
	I0816 12:48:46.987934   28466 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key
	I0816 12:48:46.987949   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 12:48:46.987961   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 12:48:46.987974   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 12:48:46.987986   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 12:48:46.987998   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 12:48:46.988010   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 12:48:46.988022   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 12:48:46.988033   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 12:48:46.988082   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 12:48:46.988141   28466 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 12:48:46.988153   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 12:48:46.988177   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 12:48:46.988199   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:48:46.988220   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 12:48:46.988257   28466 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 12:48:46.988285   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 12:48:46.988301   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 12:48:46.988314   28466 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:46.988807   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:48:47.012890   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:48:47.036546   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:48:47.060027   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:48:47.083638   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 12:48:47.107700   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 12:48:47.131823   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:48:47.156027   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/ha-863936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:48:47.179930   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 12:48:47.203842   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 12:48:47.227641   28466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:48:47.252594   28466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:48:47.269431   28466 ssh_runner.go:195] Run: openssl version
	I0816 12:48:47.275563   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:48:47.285825   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:47.290289   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:47.290332   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:48:47.295837   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:48:47.304644   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 12:48:47.315050   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 12:48:47.319541   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 12:48:47.319583   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 12:48:47.325197   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 12:48:47.334118   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 12:48:47.344896   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 12:48:47.349565   28466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 12:48:47.349622   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 12:48:47.355259   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 12:48:47.364307   28466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:48:47.368925   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 12:48:47.374542   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 12:48:47.380047   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 12:48:47.385632   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 12:48:47.391346   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 12:48:47.397080   28466 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 12:48:47.402850   28466 kubeadm.go:392] StartCluster: {Name:ha-863936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863936 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.74 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:48:47.402987   28466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:48:47.403060   28466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:48:47.446398   28466 cri.go:89] found id: "41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f"
	I0816 12:48:47.446423   28466 cri.go:89] found id: "13ee625d64ac22e3dbd2a411db60aa943aca2b0965240ce6d86470b99d108a28"
	I0816 12:48:47.446427   28466 cri.go:89] found id: "27fd86b233d7915b829d3d87a08450886d7cf55ca3dafce85c215cb3718f4022"
	I0816 12:48:47.446430   28466 cri.go:89] found id: "6c1af75bd6dc5d1a0980fa2b20a308aa9c311599686714bc15f19c6a16dcd811"
	I0816 12:48:47.446433   28466 cri.go:89] found id: "a7e67a022e7b9b1a5a3ea3fbc46623fd4813ff6efeaf4cff5f954a956b23545c"
	I0816 12:48:47.446436   28466 cri.go:89] found id: "a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696"
	I0816 12:48:47.446438   28466 cri.go:89] found id: "8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6"
	I0816 12:48:47.446441   28466 cri.go:89] found id: "b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331"
	I0816 12:48:47.446443   28466 cri.go:89] found id: "4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4"
	I0816 12:48:47.446448   28466 cri.go:89] found id: "50ae5af99f5970011dec9ba89fd0047f1f9b657bdad8b1e90a1718aa00bdd86a"
	I0816 12:48:47.446450   28466 cri.go:89] found id: "f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559"
	I0816 12:48:47.446453   28466 cri.go:89] found id: "ee882e5e99dadc7370d79fccecde5adec2c82fc5cf4d93a04c88222c888fc1a9"
	I0816 12:48:47.446455   28466 cri.go:89] found id: "4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d"
	I0816 12:48:47.446457   28466 cri.go:89] found id: "2beea397951195fcf59b5f00713ebd9cc8a260e3975fa901a4733ac52610bd62"
	I0816 12:48:47.446461   28466 cri.go:89] found id: ""
	I0816 12:48:47.446500   28466 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 16 12:53:43 ha-863936 crio[3656]: time="2024-08-16 12:53:43.958636964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812823958601813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ea533f5-6374-4710-87e6-41a3eca8fa2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:43 ha-863936 crio[3656]: time="2024-08-16 12:53:43.959480100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b87ef8b0-3c14-4867-8440-a4f121f510fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:43 ha-863936 crio[3656]: time="2024-08-16 12:53:43.959537217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b87ef8b0-3c14-4867-8440-a4f121f510fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:43 ha-863936 crio[3656]: time="2024-08-16 12:53:43.960011477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b87ef8b0-3c14-4867-8440-a4f121f510fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.004525770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0123423-748c-4f94-b269-0b928c0c8d7c name=/runtime.v1.RuntimeService/Version
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.004598831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0123423-748c-4f94-b269-0b928c0c8d7c name=/runtime.v1.RuntimeService/Version
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.005920085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34d8b29a-6819-4fd7-95d2-76489ae28941 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.006551682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812824006517306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34d8b29a-6819-4fd7-95d2-76489ae28941 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.007931395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56f961f7-ad0f-4f52-8ade-807f3366d1c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.008322208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56f961f7-ad0f-4f52-8ade-807f3366d1c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.008869115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56f961f7-ad0f-4f52-8ade-807f3366d1c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.054697300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01e585e5-a537-46b6-9dd3-8112404a68bf name=/runtime.v1.RuntimeService/Version
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.054795132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01e585e5-a537-46b6-9dd3-8112404a68bf name=/runtime.v1.RuntimeService/Version
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.056274892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5ffc204-9710-43f5-958c-90a586e4b5db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.056735238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812824056712174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5ffc204-9710-43f5-958c-90a586e4b5db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.057543826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c54c9fc5-2755-4dd1-a931-dc0261843a2d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.057615281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c54c9fc5-2755-4dd1-a931-dc0261843a2d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.059132166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c54c9fc5-2755-4dd1-a931-dc0261843a2d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.108704203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ca1dbe8-dc26-499e-8ede-192880bec19b name=/runtime.v1.RuntimeService/Version
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.108783724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ca1dbe8-dc26-499e-8ede-192880bec19b name=/runtime.v1.RuntimeService/Version
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.110869163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcd79d18-e4d1-49f7-96fd-261b3d832c10 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.111419728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812824111393826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcd79d18-e4d1-49f7-96fd-261b3d832c10 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.111999280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9c64e48-ab88-4403-944a-5cf2c09c5855 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.112059790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9c64e48-ab88-4403-944a-5cf2c09c5855 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 12:53:44 ha-863936 crio[3656]: time="2024-08-16 12:53:44.112450224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9f9cdb49f2e208b14ce5d538c1296d5ca31308ea50d93a324f1dab81ee4828b,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723812579915111302,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723812574918416022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffb5eb0f0df4c6a354d94a448dd7733348df5a3111df63b97081e652e00b3e,PodSandboxId:05ebc660f3a004b33b1919d66a501b861a346ffad7393c37ed846418f998c414,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723812567175517801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723812565604659437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03a55be0c75f679f976d46b357088d845045a272d05087a1511d8fc11be9ba3,PodSandboxId:7153f574b2fa16ca13f0001f759a4d888f92fba2212bf78c01d878a62f33fffb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723812548310117133,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec2b816f95a9c13a68e8d3dd18d3822,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c8f68686797ce6bff3488c73dbe8881981f9e2018359937476bdda33cffec9,PodSandboxId:6c6ce59b10f027dda0492ea2cc7784e979b2e4575b41a6d3d4b4846c708ff8ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723812534131682004,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e7b7e6-00b6-42e2-9680-e6660e76bc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700,PodSandboxId:4228908a42f0b13c126674c42df44178b1e535b8d1ad73c15191ff27418e1227,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723812534116774628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d,PodSandboxId:b6dbe84b7a3435716739b7dbb7ea3a870883e79c799c8dfd8cb89705572eee2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723812534010316842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ab681701
e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec,PodSandboxId:564511f6f39d5c5233053e8250813d972ced174a53b45f6c71837335ff02ddf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812534063334241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b,PodSandboxId:e745636924446357136cb347e503cfc0a7a32c790c13d1e18119fc81ec82dc45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723812533797486126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae,PodSandboxId:2a3662a22babc6afd74ebfb7d9e65638e5c992e2f3819ed20bdb3a33ce29b12b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723812533746497816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1758561f6a2148ce3a7eabea3ce99a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd,PodSandboxId:2e5be3e2e792c4932d57dc8ec1637bf1d0433315a23c63bbed71994fd7c314e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723812533587378908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa,PodSandboxId:2e5770a9723b139fb8fcee684674d19f2f484e23f583240f470a287bc07a0a70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723812533668441533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb02b673e0a97e6d66a5a7404114d26,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f,PodSandboxId:9f99975c570c519e8fc16d94f6f3a955a2f0da1baca56b1235fda2b87ed58bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723812526612893130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73d7f930e1761fdc56db10356ad46f9f2acd1f4aead50f57efa3558af4e0b18,PodSandboxId:5f9b33b7fe6f25a53393dfc965ee81bb65952c3ab4fc610bd3fa7395f2ed6d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723812042160165915,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zqpfx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c52c866f-81c3-423f-a604-f792834e341e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696,PodSandboxId:13e4c008cfb7ea17cb823e290756e07b0177dd0379a53dafaff6302e03252b5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856865765181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ssb5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162fb17-6897-40d2-9c2c-80157ea46e07,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6,PodSandboxId:7061cc0bd22ace243b66f598d9799b3e59733e06ba7f688f1f4a72a56387bfd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723811856826516024,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-7gfgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797ae351-63bf-4994-a9bd-901367887b58,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331,PodSandboxId:d524a508e86ff890d883786349c2b55fe61dc345620d11bc49cfc83efa8c5816,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723811844925898697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dddkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87bd9636-168b-4f61-9382-0914014af5c0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4,PodSandboxId:e0fda91da3630c4c4c4612e48a47583f0c6a77f263ee246204a23e60b2f9156c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723811840918266847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g75mg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d22ea17-7ddd-4c07-89d5-0ebaa170066c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559,PodSandboxId:30242516e8e9ac227e7aba5fcf3357980c39bf1d53d5180208366d9151a9f6e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723811829571387585,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4eb1802b446ee0233a6ed400bf8fd33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d,PodSandboxId:40cdcfe4bd9df902d0159353292c04634d78c4dfe6f98b844b9ee744dd1f4204,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723811829474003968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0bcffbcfcc9f18fc26b991d99b329e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9c64e48-ab88-4403-944a-5cf2c09c5855 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d9f9cdb49f2e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   6c6ce59b10f02       storage-provisioner
	f272c68cb5f2b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   2a3662a22babc       kube-controller-manager-ha-863936
	e2ffb5eb0f0df       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   05ebc660f3a00       busybox-7dff88458-zqpfx
	f57541ef075e2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   2e5770a9723b1       kube-apiserver-ha-863936
	f03a55be0c75f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   7153f574b2fa1       kube-vip-ha-863936
	47c8f68686797       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   6c6ce59b10f02       storage-provisioner
	15e34877aa55b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   4228908a42f0b       kube-proxy-g75mg
	6ab681701e029       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   564511f6f39d5       coredns-6f6b679f8f-7gfgm
	69857a90cec72       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   b6dbe84b7a343       kindnet-dddkq
	4382fddee87cc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   e745636924446       etcd-ha-863936
	716dd81dd1440       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Exited              kube-controller-manager   1                   2a3662a22babc       kube-controller-manager-ha-863936
	de7f3e4f5a386       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Exited              kube-apiserver            2                   2e5770a9723b1       kube-apiserver-ha-863936
	ec46a3a2004fc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   2e5be3e2e792c       kube-scheduler-ha-863936
	41ebcb2f3d94d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   9f99975c570c5       coredns-6f6b679f8f-ssb5h
	e73d7f930e176       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   5f9b33b7fe6f2       busybox-7dff88458-zqpfx
	a32107a6690bf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   13e4c008cfb7e       coredns-6f6b679f8f-ssb5h
	8fb58a4d7b8e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   7061cc0bd22ac       coredns-6f6b679f8f-7gfgm
	b83ba25619ab6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   d524a508e86ff       kindnet-dddkq
	4aa588906cdcd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   e0fda91da3630       kube-proxy-g75mg
	f34879b3d9bde       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   30242516e8e9a       etcd-ha-863936
	4a0281c780fc2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   40cdcfe4bd9df       kube-scheduler-ha-863936
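
The table above is the node-side CRI view of the same containers described in the ListContainers log line. A minimal way to reproduce it against a live profile, assuming the ha-863936 VM is still running and crictl is available in the guest, is:

    # list all containers (running and exited) known to CRI-O on the node
    out/minikube-linux-amd64 -p ha-863936 ssh "sudo crictl ps -a"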
	
	
	==> coredns [41ebcb2f3d94d6faf106f480f7a3c9a88b9a72f2e4dfd7393af7cd6c72e2079f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49222->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49222->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49216->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49216->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
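
The repeated list/watch failures above mean this CoreDNS replica could not reach the kube-apiserver through the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting. A hedged way to re-check the same path once the cluster settles, assuming the kubeconfig context carries the profile name ha-863936, is:

    # confirm the kubernetes Service has healthy apiserver endpoints behind 10.96.0.1
    kubectl --context ha-863936 -n default get endpoints kubernetes
    # pull the latest CoreDNS logs to see whether the reflector errors have stopped
    kubectl --context ha-863936 -n kube-system logs -l k8s-app=kube-dns --tail=20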
	
	
	==> coredns [6ab681701e0290677bb191586833dac1bc9c69e12654ddb92b30341260d90fec] <==
	Trace[1153588394]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:49:07.551)
	Trace[1153588394]: [10.001927654s] [10.001927654s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52598->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52598->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:52602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8fb58a4d7b8e84f3429f58f729e1f93b54a8aadb737f10f997bf64e601d6edd6] <==
	[INFO] 10.244.2.2:33554 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202349s
	[INFO] 10.244.2.2:49854 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138224s
	[INFO] 10.244.2.2:52911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113497s
	[INFO] 10.244.1.2:58083 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001926786s
	[INFO] 10.244.1.2:40090 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179243s
	[INFO] 10.244.0.4:38072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911453s
	[INFO] 10.244.0.4:48123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124668s
	[INFO] 10.244.2.2:45589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104297s
	[INFO] 10.244.2.2:47676 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096845s
	[INFO] 10.244.2.2:34029 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090037s
	[INFO] 10.244.2.2:44387 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085042s
	[INFO] 10.244.1.2:39606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160442s
	[INFO] 10.244.1.2:35616 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085764s
	[INFO] 10.244.1.2:41949 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261174s
	[INFO] 10.244.1.2:33001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071351s
	[INFO] 10.244.0.4:57464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150636s
	[INFO] 10.244.2.2:55232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242943s
	[INFO] 10.244.2.2:35398 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209274s
	[INFO] 10.244.1.2:40761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122103s
	[INFO] 10.244.1.2:46518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133408s
	[INFO] 10.244.1.2:41022 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117384s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
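
The query entries above (A/AAAA/PTR lookups for names such as kubernetes.default.svc.cluster.local) are ordinary lookups served before this replica received SIGTERM. One illustrative way to generate the same kind of entries, assuming the ha-863936 context, is to resolve a cluster name from a throwaway pod:

    # run a short-lived busybox pod and query an in-cluster DNS name (pod name is arbitrary)
    kubectl --context ha-863936 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local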
	
	
	==> coredns [a32107a6690bf88691db72f9019b8ec9c8c6e1ce5e447fa21e37000fbe8fe696] <==
	[INFO] 10.244.1.2:37962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128488s
	[INFO] 10.244.1.2:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098031s
	[INFO] 10.244.1.2:33689 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277395s
	[INFO] 10.244.1.2:40131 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001237471s
	[INFO] 10.244.1.2:39633 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131283s
	[INFO] 10.244.1.2:60171 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121735s
	[INFO] 10.244.0.4:60191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114357s
	[INFO] 10.244.0.4:41890 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066371s
	[INFO] 10.244.0.4:55945 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119788s
	[INFO] 10.244.0.4:57226 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001318461s
	[INFO] 10.244.0.4:56732 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093503s
	[INFO] 10.244.0.4:52075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104691s
	[INFO] 10.244.0.4:60105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121048s
	[INFO] 10.244.0.4:43134 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066121s
	[INFO] 10.244.0.4:44998 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063593s
	[INFO] 10.244.2.2:47337 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013984s
	[INFO] 10.244.2.2:54916 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155787s
	[INFO] 10.244.1.2:40477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149375s
	[INFO] 10.244.0.4:48877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125695s
	[INFO] 10.244.0.4:37769 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100407s
	[INFO] 10.244.0.4:53971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045729s
	[INFO] 10.244.0.4:37660 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000216606s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1935&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-863936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_37_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:37:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:53:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:52:26 +0000   Fri, 16 Aug 2024 12:52:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:52:26 +0000   Fri, 16 Aug 2024 12:52:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:52:26 +0000   Fri, 16 Aug 2024 12:52:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:52:26 +0000   Fri, 16 Aug 2024 12:52:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-863936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f8ad5d72f24178a58c9bc9c1f37801
	  System UUID:                10f8ad5d-72f2-4178-a58c-9bc9c1f37801
	  Boot ID:                    4cc922cf-4096-4ce6-955a-2954b5f98b77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zqpfx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-7gfgm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-ssb5h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-863936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-dddkq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-863936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-863936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-g75mg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-863936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-863936                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m7s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Warning  ContainerGCFailed        5m29s (x2 over 6m29s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m12s (x3 over 6m1s)   kubelet          Node ha-863936 status is now: NodeNotReady
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-863936 event: Registered Node ha-863936 in Controller
	  Normal   NodeNotReady             101s                   node-controller  Node ha-863936 status is now: NodeNotReady
	  Normal   NodeReady                78s (x2 over 16m)      kubelet          Node ha-863936 status is now: NodeReady
	  Normal   NodeHasSufficientPID     78s (x2 over 16m)      kubelet          Node ha-863936 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    78s (x2 over 16m)      kubelet          Node ha-863936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  78s (x2 over 16m)      kubelet          Node ha-863936 status is now: NodeHasSufficientMemory
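
The ContainerGCFailed and NodeNotReady events above line up with the window in which the CRI-O socket (/var/run/crio/crio.sock) was unavailable during the restart; the node returns to Ready once the kubelet can reach the runtime again. A hedged way to re-check node conditions and the most recent events for this node, assuming the ha-863936 context, is:

    # current readiness of every node in the cluster
    kubectl --context ha-863936 get nodes -o wide
    # events recorded for the ha-863936 node, newest last
    kubectl --context ha-863936 get events --field-selector involvedObject.name=ha-863936 --sort-by=.lastTimestamp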
	
	
	Name:               ha-863936-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:38:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:53:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:50:11 +0000   Fri, 16 Aug 2024 12:49:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-863936-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c538a90b7afb4607a2068ae6c8689740
	  System UUID:                c538a90b-7afb-4607-a206-8ae6c8689740
	  Boot ID:                    49123558-9c59-443f-8741-ca8abe8591ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5tjw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-863936-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-qmrb2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-863936-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-863936-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-7lvfc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-863936-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-863936-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m54s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-863936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-863936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-863936-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-863936-m02 status is now: NodeNotReady
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node ha-863936-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m34s)  kubelet          Node ha-863936-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-863936-m02 event: Registered Node ha-863936-m02 in Controller
	
	
	Name:               ha-863936-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863936-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-863936
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T12_41_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:41:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863936-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:51:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:52:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:52:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:52:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 12:50:57 +0000   Fri, 16 Aug 2024 12:52:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-863936-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13346cf592d54450aa4bb72c3dba17c9
	  System UUID:                13346cf5-92d5-4450-aa4b-b72c3dba17c9
	  Boot ID:                    dda3aa5d-b523-4d77-a3a9-6f8a86052d9e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-58t7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-c6wlb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-lsjgf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-863936-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-863936-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-863936-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-863936-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-863936-m04 event: Registered Node ha-863936-m04 in Controller
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-863936-m04 has been rebooted, boot id: dda3aa5d-b523-4d77-a3a9-6f8a86052d9e
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeReady                2m47s                  kubelet          Node ha-863936-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  2m46s (x2 over 2m47s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x2 over 2m47s)  kubelet          Node ha-863936-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x2 over 2m47s)  kubelet          Node ha-863936-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             103s (x2 over 3m28s)   node-controller  Node ha-863936-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +10.777615] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.058123] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055634] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.181681] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.119869] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.269746] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Aug16 12:37] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.293923] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.058457] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.209516] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.086313] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133654] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.050308] kauditd_printk_skb: 34 callbacks suppressed
	[Aug16 12:39] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 12:45] kauditd_printk_skb: 1 callbacks suppressed
	[Aug16 12:48] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[  +0.145699] systemd-fstab-generator[3588]: Ignoring "noauto" option for root device
	[  +0.186683] systemd-fstab-generator[3602]: Ignoring "noauto" option for root device
	[  +0.142181] systemd-fstab-generator[3614]: Ignoring "noauto" option for root device
	[  +0.277142] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.899065] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +6.580987] kauditd_printk_skb: 132 callbacks suppressed
	[Aug16 12:49] kauditd_printk_skb: 76 callbacks suppressed
	[ +26.726246] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.536427] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4382fddee87cc3d877e1bd39791f2475d440812fbbe775970391626e16ed2c4b] <==
	{"level":"info","ts":"2024-08-16T12:50:16.880875Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6c80de388e5020e8","to":"d6e396237a03cb80","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T12:50:16.880937Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:16.882156Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6c80de388e5020e8","to":"d6e396237a03cb80","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T12:50:16.882211Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:50:21.127158Z","caller":"traceutil/trace.go:171","msg":"trace[279510133] transaction","detail":"{read_only:false; response_revision:2398; number_of_response:1; }","duration":"127.970338ms","start":"2024-08-16T12:50:20.999172Z","end":"2024-08-16T12:50:21.127142Z","steps":["trace[279510133] 'process raft request'  (duration: 127.760605ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:51:11.098557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(7818493287602331880 12191234053279528872)"}
	{"level":"info","ts":"2024-08-16T12:51:11.100789Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","removed-remote-peer-id":"d6e396237a03cb80","removed-remote-peer-urls":["https://192.168.39.116:2380"]}
	{"level":"warn","ts":"2024-08-16T12:51:11.101103Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"6c80de388e5020e8","removed-member-id":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.101188Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-08-16T12:51:11.100912Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.101592Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.116:43378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-16T12:51:11.102378Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:51:11.102486Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.110155Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:51:11.110216Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:51:11.110282Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.110443Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","error":"context canceled"}
	{"level":"warn","ts":"2024-08-16T12:51:11.110514Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d6e396237a03cb80","error":"failed to read d6e396237a03cb80 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-16T12:51:11.110562Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.110692Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T12:51:11.110740Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:51:11.110775Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:51:11.110810Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6c80de388e5020e8","removed-remote-peer-id":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.121228Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6c80de388e5020e8","remote-peer-id-stream-handler":"6c80de388e5020e8","remote-peer-id-from":"d6e396237a03cb80"}
	{"level":"warn","ts":"2024-08-16T12:51:11.122164Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.116:48908","server-name":"","error":"EOF"}
	
	
	==> etcd [f34879b3d9bde9efea1f6d29ab90f29e4d7c261b375b99154b6292d41fc10559] <==
	2024/08/16 12:47:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/16 12:47:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:47:13.597527Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:47:13.597624Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T12:47:13.599649Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6c80de388e5020e8","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T12:47:13.599983Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600065Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600120Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600237Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600312Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600356Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600390Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a92ff6c78f5f37a8"}
	{"level":"info","ts":"2024-08-16T12:47:13.600401Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600419Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600453Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600558Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600620Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600661Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.600697Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d6e396237a03cb80"}
	{"level":"info","ts":"2024-08-16T12:47:13.603309Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"warn","ts":"2024-08-16T12:47:13.603431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.901135296s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-16T12:47:13.603462Z","caller":"traceutil/trace.go:171","msg":"trace[1531816342] range","detail":"{range_begin:; range_end:; }","duration":"8.901182222s","start":"2024-08-16T12:47:04.702269Z","end":"2024-08-16T12:47:13.603451Z","steps":["trace[1531816342] 'agreement among raft nodes before linearized reading'  (duration: 8.901133699s)"],"step_count":1}
	{"level":"error","ts":"2024-08-16T12:47:13.603536Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-16T12:47:13.604481Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2024-08-16T12:47:13.604523Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-863936","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"]}
	
	
	==> kernel <==
	 12:53:44 up 17 min,  0 users,  load average: 0.35, 0.39, 0.28
	Linux ha-863936 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69857a90cec728020099e00ae2fc308ffd2f0b58830b3d9498eb2371af8f090d] <==
	I0816 12:52:55.124874       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:53:05.127199       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:53:05.127397       1 main.go:299] handling current node
	I0816 12:53:05.127446       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:53:05.127454       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:53:05.127744       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:53:05.127822       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:53:15.133656       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:53:15.133845       1 main.go:299] handling current node
	I0816 12:53:15.133905       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:53:15.134027       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:53:15.134336       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:53:15.134409       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:53:25.133734       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:53:25.133773       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:53:25.134033       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:53:25.134070       1 main.go:299] handling current node
	I0816 12:53:25.134082       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:53:25.134088       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:53:35.127009       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:53:35.127057       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:53:35.127231       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:53:35.127282       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:53:35.127337       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:53:35.127357       1 main.go:299] handling current node
	
	
	==> kindnet [b83ba25619ab61e7a449e6a735eb3cca59c184e250a87f2af5b712fbc37fc331] <==
	I0816 12:46:36.069611       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:46:46.068916       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:46:46.068999       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:46:46.069167       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:46:46.069176       1 main.go:299] handling current node
	I0816 12:46:46.069199       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:46:46.069204       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:46:46.069257       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:46:46.069262       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:46:56.071559       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:46:56.071600       1 main.go:299] handling current node
	I0816 12:46:56.071625       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:46:56.071630       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:46:56.071821       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:46:56.071858       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:46:56.071928       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:46:56.072003       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	I0816 12:47:06.077543       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0816 12:47:06.077910       1 main.go:299] handling current node
	I0816 12:47:06.078036       1 main.go:295] Handling node with IPs: map[192.168.39.101:{}]
	I0816 12:47:06.078133       1 main.go:322] Node ha-863936-m02 has CIDR [10.244.1.0/24] 
	I0816 12:47:06.079102       1 main.go:295] Handling node with IPs: map[192.168.39.116:{}]
	I0816 12:47:06.079145       1 main.go:322] Node ha-863936-m03 has CIDR [10.244.2.0/24] 
	I0816 12:47:06.079235       1 main.go:295] Handling node with IPs: map[192.168.39.74:{}]
	I0816 12:47:06.079254       1 main.go:322] Node ha-863936-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [de7f3e4f5a38619439599a84b3612d1e59e247b98adf7d481f48fc64ef8228aa] <==
	I0816 12:48:54.352674       1 options.go:228] external host was not specified, using 192.168.39.2
	I0816 12:48:54.371475       1 server.go:142] Version: v1.31.0
	I0816 12:48:54.371539       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:48:55.002378       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 12:48:55.028456       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 12:48:55.032470       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 12:48:55.037006       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 12:48:55.037343       1 instance.go:232] Using reconciler: lease
	W0816 12:49:15.001509       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0816 12:49:15.002173       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0816 12:49:15.038767       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f57541ef075e2aebbbe3b597c77782777a7e1dbb4dc82e74f19fc2a5cba915d5] <==
	I0816 12:49:31.063929       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0816 12:49:31.147748       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 12:49:31.147886       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 12:49:31.147923       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 12:49:31.148044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 12:49:31.148395       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 12:49:31.148822       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 12:49:31.151024       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 12:49:31.157268       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0816 12:49:31.162930       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116]
	I0816 12:49:31.164133       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 12:49:31.164196       1 aggregator.go:171] initial CRD sync complete...
	I0816 12:49:31.164232       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 12:49:31.164255       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 12:49:31.164284       1 cache.go:39] Caches are synced for autoregister controller
	I0816 12:49:31.174799       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 12:49:31.181146       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 12:49:31.181191       1 policy_source.go:224] refreshing policies
	I0816 12:49:31.198459       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 12:49:31.264390       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 12:49:31.285246       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0816 12:49:31.290529       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0816 12:49:32.059679       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 12:49:32.525123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.116 192.168.39.2]
	W0816 12:49:52.507698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.2]
	
	
	==> kube-controller-manager [716dd81dd144015c07273ec8072c3f31367582e5d9e70d6f89d3c6b2c8a520ae] <==
	I0816 12:48:55.071083       1 serving.go:386] Generated self-signed cert in-memory
	I0816 12:48:55.303350       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 12:48:55.303384       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:48:55.304886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 12:48:55.305077       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:48:55.305279       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 12:48:55.305460       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 12:49:16.046352       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.2:8443/healthz\": dial tcp 192.168.39.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f272c68cb5f2b671fbb4fde72d736ec8e3c47238d4c785b6a1d30c25b92ce44c] <==
	I0816 12:52:03.120008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:52:03.120093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936"
	I0816 12:52:03.127264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.950722ms"
	I0816 12:52:03.128218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.059µs"
	I0816 12:52:03.247357       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fc2lc\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 12:52:03.248694       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a322e0ec-14ff-458e-bae7-924b2e2d8142", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fc2lc": the object has been modified; please apply your changes to the latest version and try again
	I0816 12:52:03.269164       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fc2lc\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 12:52:03.270071       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a322e0ec-14ff-458e-bae7-924b2e2d8142", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fc2lc": the object has been modified; please apply your changes to the latest version and try again
	I0816 12:52:03.289519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="93.100796ms"
	I0816 12:52:03.343492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="53.860295ms"
	I0816 12:52:03.343711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="119.468µs"
	I0816 12:52:06.717865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936-m04"
	I0816 12:52:13.197016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936"
	I0816 12:52:16.799207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936"
	I0816 12:52:25.944211       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fc2lc\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 12:52:25.944533       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a322e0ec-14ff-458e-bae7-924b2e2d8142", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fc2lc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fc2lc": the object has been modified; please apply your changes to the latest version and try again
	I0816 12:52:26.008012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="82.325385ms"
	I0816 12:52:26.008222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.920356ms"
	I0816 12:52:26.008540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="86.159µs"
	I0816 12:52:26.008705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="423.07µs"
	I0816 12:52:26.190494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="76.050538ms"
	I0816 12:52:26.190612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="68.088µs"
	I0816 12:52:26.475059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936"
	I0816 12:52:26.487171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936"
	I0816 12:52:26.707294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863936"
	
	
	==> kube-proxy [15e34877aa55b56dd2af2c8b4c94de3639e13e9aa2640f4dc59c76f1d0ffd700] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:48:55.906426       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:48:58.978676       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:49:02.051481       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:49:08.195335       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 12:49:17.411538       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863936\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0816 12:49:36.556503       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	E0816 12:49:36.556679       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:49:36.632240       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:49:36.632390       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:49:36.632489       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:49:36.635241       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:49:36.635675       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:49:36.635830       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:49:36.637544       1 config.go:197] "Starting service config controller"
	I0816 12:49:36.637852       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:49:36.638020       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:49:36.638124       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:49:36.638808       1 config.go:326] "Starting node config controller"
	I0816 12:49:36.638879       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:49:36.738298       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:49:36.738519       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:49:36.739066       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4aa588906cdcd0cf4fcb973469df4a1d02cafe5e3388d6c516665f0d1af8ceb4] <==
	E0816 12:46:10.021919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:13.090527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:13.090655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:13.090794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:13.090832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:13.090916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:13.091011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:19.235825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:19.235988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:19.236146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:19.236189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:22.306487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:22.306894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:31.524574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:31.524645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:31.524777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:31.524844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:34.594839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:34.594902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:53.027463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:53.027588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:53.027817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:53.027932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-863936&resourceVersion=1936\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 12:46:59.187815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 12:46:59.188321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [4a0281c780fc2216d75b34fb0ab5edeca5d750269010e7a86e842bc53970539d] <==
	E0816 12:41:15.420071       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c6wlb\": pod kindnet-c6wlb is already assigned to node \"ha-863936-m04\"" pod="kube-system/kindnet-c6wlb"
	I0816 12:41:15.420190       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c6wlb" node="ha-863936-m04"
	E0816 12:41:15.413578       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	E0816 12:41:15.424458       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71a9943c-8ebe-4a91-876f-8e47aca3f719(kube-system/kube-proxy-lsjgf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lsjgf"
	E0816 12:41:15.425608       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lsjgf\": pod kube-proxy-lsjgf is already assigned to node \"ha-863936-m04\"" pod="kube-system/kube-proxy-lsjgf"
	I0816 12:41:15.425683       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lsjgf" node="ha-863936-m04"
	E0816 12:47:00.767753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0816 12:47:02.002268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0816 12:47:02.017816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0816 12:47:02.124860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0816 12:47:02.148673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0816 12:47:02.159416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0816 12:47:03.299738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0816 12:47:04.865817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0816 12:47:05.099148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0816 12:47:08.348477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0816 12:47:09.044545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0816 12:47:10.729059       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0816 12:47:11.496273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0816 12:47:12.303791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0816 12:47:12.375580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	I0816 12:47:13.519532       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0816 12:47:13.519751       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0816 12:47:13.520061       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0816 12:47:13.523832       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ec46a3a2004fcad11de1bba2d1c355d99915bafd65d77051d5e38834061756fd] <==
	W0816 12:49:23.970024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:23.970090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:24.061304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:24.061370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:24.523684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:24.523756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:24.805592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:24.805709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.124651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.124736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.296302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.296427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.304246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.304367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.495523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.495635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:25.607372       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.2:8443: connect: connection refused
	E0816 12:49:25.607440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.2:8443: connect: connection refused" logger="UnhandledError"
	W0816 12:49:31.080890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 12:49:31.081092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:49:31.081350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:49:31.081445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0816 12:49:33.861805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 12:51:07.798816       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-58t7b\": pod busybox-7dff88458-58t7b is already assigned to node \"ha-863936-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-58t7b" node="ha-863936-m04"
	E0816 12:51:07.808661       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-58t7b\": pod busybox-7dff88458-58t7b is already assigned to node \"ha-863936-m04\"" pod="default/busybox-7dff88458-58t7b"
	
	
	==> kubelet <==
	Aug 16 12:52:19 ha-863936 kubelet[1336]: W0816 12:52:19.089255    1336 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 16 12:52:19 ha-863936 kubelet[1336]: E0816 12:52:19.089311    1336 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-863936?timeout=10s\": http2: client connection lost"
	Aug 16 12:52:19 ha-863936 kubelet[1336]: I0816 12:52:19.089346    1336 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 16 12:52:19 ha-863936 kubelet[1336]: W0816 12:52:19.089096    1336 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 16 12:52:26 ha-863936 kubelet[1336]: E0816 12:52:26.257672    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812746257205379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:26 ha-863936 kubelet[1336]: E0816 12:52:26.258125    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812746257205379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:36 ha-863936 kubelet[1336]: E0816 12:52:36.260117    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812756259676417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:36 ha-863936 kubelet[1336]: E0816 12:52:36.260390    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812756259676417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:46 ha-863936 kubelet[1336]: E0816 12:52:46.262822    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812766262327569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:46 ha-863936 kubelet[1336]: E0816 12:52:46.262871    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812766262327569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:56 ha-863936 kubelet[1336]: E0816 12:52:56.265012    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812776264443918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:52:56 ha-863936 kubelet[1336]: E0816 12:52:56.265115    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812776264443918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:06 ha-863936 kubelet[1336]: E0816 12:53:06.269152    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812786268378405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:06 ha-863936 kubelet[1336]: E0816 12:53:06.269231    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812786268378405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:15 ha-863936 kubelet[1336]: E0816 12:53:15.927433    1336 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:53:15 ha-863936 kubelet[1336]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:53:15 ha-863936 kubelet[1336]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:53:15 ha-863936 kubelet[1336]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:53:15 ha-863936 kubelet[1336]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:53:16 ha-863936 kubelet[1336]: E0816 12:53:16.271840    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812796271467655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:16 ha-863936 kubelet[1336]: E0816 12:53:16.271868    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812796271467655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:26 ha-863936 kubelet[1336]: E0816 12:53:26.274647    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812806273870012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:26 ha-863936 kubelet[1336]: E0816 12:53:26.274785    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812806273870012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:36 ha-863936 kubelet[1336]: E0816 12:53:36.276733    1336 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812816276342099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:53:36 ha-863936 kubelet[1336]: E0816 12:53:36.277105    1336 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723812816276342099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0816 12:53:43.668346   30763 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-3966/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
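The "bufio.Scanner: token too long" failure above comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB (bufio.MaxScanTokenSize); a single over-long line in lastStart.txt is enough to abort post-mortem log collection. A minimal, self-contained Go sketch (not minikube's actual logs.go code) that reproduces the error and shows the usual workaround of enlarging the scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// One "line" longer than bufio.MaxScanTokenSize (64 KiB) trips the default Scanner,
		// the same failure mode as reading an over-long line from lastStart.txt.
		longLine := strings.Repeat("x", 2*bufio.MaxScanTokenSize)

		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long

		// Raising the maximum token size lets the same input scan cleanly.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for s.Scan() {
		}
		fmt.Println("enlarged buffer:", s.Err()) // prints <nil>
	}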
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863936 -n ha-863936
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.66s)
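The kube-scheduler log above is dominated by "dial tcp 192.168.39.2:8443: connect: connection refused" while the cluster is being stopped: the host answers, but nothing is listening on the apiserver port, so every informer list/watch fails until the apiserver returns. A small standalone probe (not part of the test suite; the address is taken from the log above and would need to be replaced with the cluster's control-plane endpoint) that distinguishes this case from a plain network timeout:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the scheduler log above; substitute your control-plane endpoint.
		addr := "192.168.39.2:8443"

		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// "connect: connection refused" means the host replied but no kube-apiserver is
			// listening on the port; an i/o timeout here would instead suggest the VM or
			// network path is down entirely.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open at", addr)
	}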

TestMultiNode/serial/RestartKeepsNodes (327.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-336982
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-336982
E0816 13:08:56.826795   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-336982: exit status 82 (2m1.843038397s)

-- stdout --
	* Stopping node "multinode-336982-m03"  ...
	* Stopping node "multinode-336982-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
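The stop exits with status 82 and GUEST_STOP_TIMEOUT because both remaining nodes still report "Running" when minikube gives up waiting. A rough, hypothetical sketch of such a poll-until-stopped loop (stateFn and waitForStop are illustrative names, not minikube's actual stop implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stateFn stands in for a driver call that reports the VM state; hypothetical.
	type stateFn func() (string, error)

	// waitForStop polls until the machine reports "Stopped" or the deadline passes,
	// mirroring the kind of loop that ends in the GUEST_STOP_TIMEOUT above.
	func waitForStop(getState stateFn, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			st, err := getState()
			if err != nil {
				return err
			}
			if st == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`stop: unable to stop vm, current state "Running"`)
	}

	func main() {
		// A VM that never leaves "Running" reproduces the timeout path.
		err := waitForStop(func() (string, error) { return "Running", nil }, 6*time.Second)
		fmt.Println(err)
	}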
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-336982" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-336982 --wait=true -v=8 --alsologtostderr
E0816 13:10:40.922496   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:13:43.985931   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:13:56.823410   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-336982 --wait=true -v=8 --alsologtostderr: (3m23.559815879s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-336982
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-336982 -n multinode-336982
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-336982 logs -n 25: (1.486815043s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile804343114/001/cp-test_multinode-336982-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982:/home/docker/cp-test_multinode-336982-m02_multinode-336982.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982 sudo cat                                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m02_multinode-336982.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03:/home/docker/cp-test_multinode-336982-m02_multinode-336982-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982-m03 sudo cat                                   | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m02_multinode-336982-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp testdata/cp-test.txt                                                | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile804343114/001/cp-test_multinode-336982-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982:/home/docker/cp-test_multinode-336982-m03_multinode-336982.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982 sudo cat                                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m03_multinode-336982.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02:/home/docker/cp-test_multinode-336982-m03_multinode-336982-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982-m02 sudo cat                                   | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m03_multinode-336982-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-336982 node stop m03                                                          | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	| node    | multinode-336982 node start                                                             | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-336982                                                                | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:08 UTC |                     |
	| stop    | -p multinode-336982                                                                     | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:08 UTC |                     |
	| start   | -p multinode-336982                                                                     | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:10 UTC | 16 Aug 24 13:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-336982                                                                | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:10:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:10:37.862714   40000 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:10:37.862960   40000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:10:37.862969   40000 out.go:358] Setting ErrFile to fd 2...
	I0816 13:10:37.862974   40000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:10:37.863175   40000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:10:37.863694   40000 out.go:352] Setting JSON to false
	I0816 13:10:37.864571   40000 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3183,"bootTime":1723810655,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:10:37.864627   40000 start.go:139] virtualization: kvm guest
	I0816 13:10:37.867680   40000 out.go:177] * [multinode-336982] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:10:37.869050   40000 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:10:37.869059   40000 notify.go:220] Checking for updates...
	I0816 13:10:37.870573   40000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:10:37.872080   40000 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:10:37.873556   40000 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:10:37.874908   40000 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:10:37.876139   40000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:10:37.878245   40000 config.go:182] Loaded profile config "multinode-336982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:10:37.878362   40000 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:10:37.878980   40000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:10:37.879031   40000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:10:37.894197   40000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0816 13:10:37.894583   40000 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:10:37.895145   40000 main.go:141] libmachine: Using API Version  1
	I0816 13:10:37.895166   40000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:10:37.895563   40000 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:10:37.895790   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:10:37.930339   40000 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:10:37.931787   40000 start.go:297] selected driver: kvm2
	I0816 13:10:37.931798   40000 start.go:901] validating driver "kvm2" against &{Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:10:37.931932   40000 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:10:37.932268   40000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:10:37.932340   40000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:10:37.946778   40000 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:10:37.947443   40000 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:10:37.947477   40000 cni.go:84] Creating CNI manager for ""
	I0816 13:10:37.947487   40000 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 13:10:37.947542   40000 start.go:340] cluster config:
	{Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:10:37.947660   40000 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:10:37.949514   40000 out.go:177] * Starting "multinode-336982" primary control-plane node in "multinode-336982" cluster
	I0816 13:10:37.950740   40000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:10:37.950786   40000 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:10:37.950793   40000 cache.go:56] Caching tarball of preloaded images
	I0816 13:10:37.950864   40000 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:10:37.950876   40000 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:10:37.950981   40000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/config.json ...
	I0816 13:10:37.951159   40000 start.go:360] acquireMachinesLock for multinode-336982: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:10:37.951193   40000 start.go:364] duration metric: took 18.581µs to acquireMachinesLock for "multinode-336982"
	I0816 13:10:37.951212   40000 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:10:37.951219   40000 fix.go:54] fixHost starting: 
	I0816 13:10:37.951461   40000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:10:37.951492   40000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:10:37.965607   40000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0816 13:10:37.966026   40000 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:10:37.966455   40000 main.go:141] libmachine: Using API Version  1
	I0816 13:10:37.966477   40000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:10:37.966811   40000 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:10:37.967020   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:10:37.967182   40000 main.go:141] libmachine: (multinode-336982) Calling .GetState
	I0816 13:10:37.968597   40000 fix.go:112] recreateIfNeeded on multinode-336982: state=Running err=<nil>
	W0816 13:10:37.968623   40000 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:10:37.971390   40000 out.go:177] * Updating the running kvm2 "multinode-336982" VM ...
	I0816 13:10:37.972750   40000 machine.go:93] provisionDockerMachine start ...
	I0816 13:10:37.972769   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:10:37.973000   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:37.975406   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:37.975796   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:37.975822   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:37.975964   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:37.976119   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:37.976296   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:37.976426   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:37.976592   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:37.976826   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:37.976843   40000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:10:38.078028   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-336982
	
	I0816 13:10:38.078053   40000 main.go:141] libmachine: (multinode-336982) Calling .GetMachineName
	I0816 13:10:38.078277   40000 buildroot.go:166] provisioning hostname "multinode-336982"
	I0816 13:10:38.078298   40000 main.go:141] libmachine: (multinode-336982) Calling .GetMachineName
	I0816 13:10:38.078500   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.081036   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.081382   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.081409   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.081588   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.081754   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.081901   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.082026   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.082151   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:38.082304   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:38.082318   40000 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-336982 && echo "multinode-336982" | sudo tee /etc/hostname
	I0816 13:10:38.201525   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-336982
	
	I0816 13:10:38.201548   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.204246   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.204664   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.204694   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.204869   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.205042   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.205217   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.205352   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.205533   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:38.205693   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:38.205711   40000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-336982' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-336982/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-336982' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:10:38.302767   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:10:38.302797   40000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:10:38.302822   40000 buildroot.go:174] setting up certificates
	I0816 13:10:38.302835   40000 provision.go:84] configureAuth start
	I0816 13:10:38.302850   40000 main.go:141] libmachine: (multinode-336982) Calling .GetMachineName
	I0816 13:10:38.303251   40000 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:10:38.305949   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.306403   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.306432   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.306591   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.308564   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.308825   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.308851   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.308948   40000 provision.go:143] copyHostCerts
	I0816 13:10:38.308983   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:10:38.309016   40000 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:10:38.309031   40000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:10:38.309098   40000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:10:38.309193   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:10:38.309211   40000 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:10:38.309219   40000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:10:38.309243   40000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:10:38.309287   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:10:38.309305   40000 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:10:38.309309   40000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:10:38.309330   40000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:10:38.309372   40000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.multinode-336982 san=[127.0.0.1 192.168.39.208 localhost minikube multinode-336982]
	I0816 13:10:38.628766   40000 provision.go:177] copyRemoteCerts
	I0816 13:10:38.628823   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:10:38.628844   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.631298   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.631643   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.631674   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.631855   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.632079   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.632291   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.632404   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:10:38.711540   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 13:10:38.711608   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 13:10:38.737228   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 13:10:38.737302   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:10:38.763873   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 13:10:38.763945   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0816 13:10:38.788994   40000 provision.go:87] duration metric: took 486.143983ms to configureAuth
	I0816 13:10:38.789029   40000 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:10:38.789399   40000 config.go:182] Loaded profile config "multinode-336982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:10:38.789508   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.791870   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.792252   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.792284   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.792387   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.792563   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.792717   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.792862   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.793045   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:38.793189   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:38.793203   40000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:12:09.626425   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:12:09.626453   40000 machine.go:96] duration metric: took 1m31.653690279s to provisionDockerMachine
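Note: almost all of the 1m31s reported above falls inside the single SSH command issued at 13:10:38, i.e. writing /etc/sysconfig/crio.minikube and then running "sudo systemctl restart crio", whose output only comes back at 13:12:09. If that restart ever needed investigation, one hypothetical follow-up (not run as part of this test) would be to pull the CRI-O journal for exactly that window on the guest:

    sudo journalctl -u crio --since "2024-08-16 13:10:38" --until "2024-08-16 13:12:10" --no-pager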
	I0816 13:12:09.626465   40000 start.go:293] postStartSetup for "multinode-336982" (driver="kvm2")
	I0816 13:12:09.626479   40000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:12:09.626497   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.626816   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:12:09.626845   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.629758   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.630165   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.630304   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.630403   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.630652   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.630821   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.630952   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:12:09.712956   40000 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:12:09.717434   40000 command_runner.go:130] > NAME=Buildroot
	I0816 13:12:09.717452   40000 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 13:12:09.717457   40000 command_runner.go:130] > ID=buildroot
	I0816 13:12:09.717464   40000 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 13:12:09.717469   40000 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 13:12:09.717504   40000 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:12:09.717520   40000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:12:09.717585   40000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:12:09.717674   40000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:12:09.717684   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 13:12:09.717782   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:12:09.727599   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:12:09.754383   40000 start.go:296] duration metric: took 127.906448ms for postStartSetup
	I0816 13:12:09.754428   40000 fix.go:56] duration metric: took 1m31.803207589s for fixHost
	I0816 13:12:09.754452   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.757213   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.757647   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.757676   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.757836   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.758054   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.758225   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.758371   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.758567   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:12:09.758709   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:12:09.758719   40000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:12:09.853733   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723813929.831822526
	
	I0816 13:12:09.853753   40000 fix.go:216] guest clock: 1723813929.831822526
	I0816 13:12:09.853761   40000 fix.go:229] Guest: 2024-08-16 13:12:09.831822526 +0000 UTC Remote: 2024-08-16 13:12:09.754433623 +0000 UTC m=+91.924913360 (delta=77.388903ms)
	I0816 13:12:09.853791   40000 fix.go:200] guest clock delta is within tolerance: 77.388903ms
	I0816 13:12:09.853795   40000 start.go:83] releasing machines lock for "multinode-336982", held for 1m31.902593602s
	I0816 13:12:09.853813   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.854101   40000 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:12:09.856610   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.856972   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.856999   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.857109   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.857542   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.857713   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.857801   40000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:12:09.857852   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.857943   40000 ssh_runner.go:195] Run: cat /version.json
	I0816 13:12:09.857969   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.860531   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.860860   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.860886   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.860919   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.861038   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.861228   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.861372   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.861456   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.861490   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.861492   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:12:09.861655   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.861811   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.861965   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.862100   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:12:09.954791   40000 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 13:12:09.954837   40000 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0816 13:12:09.954961   40000 ssh_runner.go:195] Run: systemctl --version
	I0816 13:12:09.960590   40000 command_runner.go:130] > systemd 252 (252)
	I0816 13:12:09.960620   40000 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0816 13:12:09.960866   40000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:12:10.121449   40000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 13:12:10.129227   40000 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 13:12:10.129274   40000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:12:10.129343   40000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:12:10.138877   40000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 13:12:10.138900   40000 start.go:495] detecting cgroup driver to use...
	I0816 13:12:10.138953   40000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:12:10.154966   40000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:12:10.170445   40000 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:12:10.170517   40000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:12:10.184593   40000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:12:10.198999   40000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:12:10.347015   40000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:12:10.498588   40000 docker.go:233] disabling docker service ...
	I0816 13:12:10.498647   40000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:12:10.516301   40000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:12:10.530034   40000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:12:10.670250   40000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:12:10.806431   40000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
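The sequence above stops, disables, and masks the cri-docker and docker units so that CRI-O is the only container runtime left active on the node. A hypothetical manual check of the resulting unit state (not part of the test run, expectations inferred from the commands above) would be:

    systemctl is-enabled cri-docker.service docker.service    # expected: masked
    systemctl is-enabled cri-docker.socket docker.socket      # expected: disabled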
	I0816 13:12:10.820851   40000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:12:10.839862   40000 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0816 13:12:10.839907   40000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:12:10.839962   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.850809   40000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:12:10.850917   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.861426   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.872048   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.882663   40000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:12:10.893516   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.903836   40000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.915264   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
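Taken together, the sed edits above adjust /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A quick, hypothetical spot-check of the keys they touch (not executed in this run) could be:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf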
	I0816 13:12:10.925754   40000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:12:10.935190   40000 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 13:12:10.935258   40000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:12:10.944154   40000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:12:11.093364   40000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:12:11.411161   40000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:12:11.411244   40000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:12:11.416300   40000 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0816 13:12:11.416326   40000 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 13:12:11.416335   40000 command_runner.go:130] > Device: 0,22	Inode: 1335        Links: 1
	I0816 13:12:11.416344   40000 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 13:12:11.416352   40000 command_runner.go:130] > Access: 2024-08-16 13:12:11.286505345 +0000
	I0816 13:12:11.416368   40000 command_runner.go:130] > Modify: 2024-08-16 13:12:11.285505322 +0000
	I0816 13:12:11.416378   40000 command_runner.go:130] > Change: 2024-08-16 13:12:11.286505345 +0000
	I0816 13:12:11.416383   40000 command_runner.go:130] >  Birth: -
	I0816 13:12:11.416411   40000 start.go:563] Will wait 60s for crictl version
	I0816 13:12:11.416454   40000 ssh_runner.go:195] Run: which crictl
	I0816 13:12:11.420272   40000 command_runner.go:130] > /usr/bin/crictl
	I0816 13:12:11.420334   40000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:12:11.463487   40000 command_runner.go:130] > Version:  0.1.0
	I0816 13:12:11.463508   40000 command_runner.go:130] > RuntimeName:  cri-o
	I0816 13:12:11.463512   40000 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0816 13:12:11.463518   40000 command_runner.go:130] > RuntimeApiVersion:  v1
	I0816 13:12:11.463665   40000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
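At this point minikube has confirmed the socket at /var/run/crio/crio.sock exists and queried the runtime through crictl. The equivalent manual query (hypothetical, mirroring the crictl call above and the endpoint written to /etc/crictl.yaml earlier) would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version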
	I0816 13:12:11.463739   40000 ssh_runner.go:195] Run: crio --version
	I0816 13:12:11.489892   40000 command_runner.go:130] > crio version 1.29.1
	I0816 13:12:11.489913   40000 command_runner.go:130] > Version:        1.29.1
	I0816 13:12:11.489921   40000 command_runner.go:130] > GitCommit:      unknown
	I0816 13:12:11.489927   40000 command_runner.go:130] > GitCommitDate:  unknown
	I0816 13:12:11.489933   40000 command_runner.go:130] > GitTreeState:   clean
	I0816 13:12:11.489941   40000 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0816 13:12:11.489947   40000 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 13:12:11.489952   40000 command_runner.go:130] > Compiler:       gc
	I0816 13:12:11.489957   40000 command_runner.go:130] > Platform:       linux/amd64
	I0816 13:12:11.489962   40000 command_runner.go:130] > Linkmode:       dynamic
	I0816 13:12:11.489969   40000 command_runner.go:130] > BuildTags:      
	I0816 13:12:11.489975   40000 command_runner.go:130] >   containers_image_ostree_stub
	I0816 13:12:11.489982   40000 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 13:12:11.489988   40000 command_runner.go:130] >   btrfs_noversion
	I0816 13:12:11.489995   40000 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 13:12:11.490005   40000 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 13:12:11.490013   40000 command_runner.go:130] >   seccomp
	I0816 13:12:11.490023   40000 command_runner.go:130] > LDFlags:          unknown
	I0816 13:12:11.490031   40000 command_runner.go:130] > SeccompEnabled:   true
	I0816 13:12:11.490083   40000 command_runner.go:130] > AppArmorEnabled:  false
	I0816 13:12:11.491236   40000 ssh_runner.go:195] Run: crio --version
	I0816 13:12:11.520075   40000 command_runner.go:130] > crio version 1.29.1
	I0816 13:12:11.520099   40000 command_runner.go:130] > Version:        1.29.1
	I0816 13:12:11.520106   40000 command_runner.go:130] > GitCommit:      unknown
	I0816 13:12:11.520111   40000 command_runner.go:130] > GitCommitDate:  unknown
	I0816 13:12:11.520115   40000 command_runner.go:130] > GitTreeState:   clean
	I0816 13:12:11.520120   40000 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0816 13:12:11.520124   40000 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 13:12:11.520129   40000 command_runner.go:130] > Compiler:       gc
	I0816 13:12:11.520133   40000 command_runner.go:130] > Platform:       linux/amd64
	I0816 13:12:11.520137   40000 command_runner.go:130] > Linkmode:       dynamic
	I0816 13:12:11.520142   40000 command_runner.go:130] > BuildTags:      
	I0816 13:12:11.520146   40000 command_runner.go:130] >   containers_image_ostree_stub
	I0816 13:12:11.520151   40000 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 13:12:11.520155   40000 command_runner.go:130] >   btrfs_noversion
	I0816 13:12:11.520158   40000 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 13:12:11.520162   40000 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 13:12:11.520165   40000 command_runner.go:130] >   seccomp
	I0816 13:12:11.520169   40000 command_runner.go:130] > LDFlags:          unknown
	I0816 13:12:11.520174   40000 command_runner.go:130] > SeccompEnabled:   true
	I0816 13:12:11.520179   40000 command_runner.go:130] > AppArmorEnabled:  false
	I0816 13:12:11.523047   40000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:12:11.524509   40000 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:12:11.527111   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:11.527410   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:11.527434   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:11.527580   40000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:12:11.531896   40000 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0816 13:12:11.532000   40000 kubeadm.go:883] updating cluster {Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:12:11.532159   40000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:12:11.532201   40000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:12:11.573486   40000 command_runner.go:130] > {
	I0816 13:12:11.573510   40000 command_runner.go:130] >   "images": [
	I0816 13:12:11.573515   40000 command_runner.go:130] >     {
	I0816 13:12:11.573523   40000 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 13:12:11.573528   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573534   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 13:12:11.573537   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573555   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573563   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 13:12:11.573570   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 13:12:11.573574   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573578   40000 command_runner.go:130] >       "size": "87165492",
	I0816 13:12:11.573582   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573587   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573593   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573597   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573600   40000 command_runner.go:130] >     },
	I0816 13:12:11.573604   40000 command_runner.go:130] >     {
	I0816 13:12:11.573611   40000 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 13:12:11.573618   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573623   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 13:12:11.573628   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573632   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573638   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 13:12:11.573647   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 13:12:11.573650   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573654   40000 command_runner.go:130] >       "size": "87190579",
	I0816 13:12:11.573658   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573667   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573671   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573675   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573678   40000 command_runner.go:130] >     },
	I0816 13:12:11.573682   40000 command_runner.go:130] >     {
	I0816 13:12:11.573687   40000 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 13:12:11.573693   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573698   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 13:12:11.573701   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573705   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573712   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 13:12:11.573719   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 13:12:11.573723   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573730   40000 command_runner.go:130] >       "size": "1363676",
	I0816 13:12:11.573733   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573741   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573748   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573752   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573755   40000 command_runner.go:130] >     },
	I0816 13:12:11.573758   40000 command_runner.go:130] >     {
	I0816 13:12:11.573763   40000 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 13:12:11.573768   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573773   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 13:12:11.573778   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573782   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573792   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 13:12:11.573807   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 13:12:11.573813   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573817   40000 command_runner.go:130] >       "size": "31470524",
	I0816 13:12:11.573821   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573827   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573834   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573838   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573844   40000 command_runner.go:130] >     },
	I0816 13:12:11.573848   40000 command_runner.go:130] >     {
	I0816 13:12:11.573854   40000 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 13:12:11.573860   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573866   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 13:12:11.573871   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573876   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573885   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 13:12:11.573894   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 13:12:11.573899   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573903   40000 command_runner.go:130] >       "size": "61245718",
	I0816 13:12:11.573909   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573914   40000 command_runner.go:130] >       "username": "nonroot",
	I0816 13:12:11.573920   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573924   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573929   40000 command_runner.go:130] >     },
	I0816 13:12:11.573932   40000 command_runner.go:130] >     {
	I0816 13:12:11.573941   40000 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 13:12:11.573945   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573952   40000 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 13:12:11.573958   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573964   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573971   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 13:12:11.573979   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 13:12:11.573983   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573987   40000 command_runner.go:130] >       "size": "149009664",
	I0816 13:12:11.573993   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.573997   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574001   40000 command_runner.go:130] >       },
	I0816 13:12:11.574005   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574009   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574013   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574016   40000 command_runner.go:130] >     },
	I0816 13:12:11.574020   40000 command_runner.go:130] >     {
	I0816 13:12:11.574028   40000 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 13:12:11.574032   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574037   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 13:12:11.574043   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574047   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574056   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 13:12:11.574067   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 13:12:11.574073   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574078   40000 command_runner.go:130] >       "size": "95233506",
	I0816 13:12:11.574084   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574088   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574092   40000 command_runner.go:130] >       },
	I0816 13:12:11.574095   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574100   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574109   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574116   40000 command_runner.go:130] >     },
	I0816 13:12:11.574119   40000 command_runner.go:130] >     {
	I0816 13:12:11.574127   40000 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 13:12:11.574133   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574138   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 13:12:11.574145   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574149   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574166   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 13:12:11.574177   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 13:12:11.574183   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574188   40000 command_runner.go:130] >       "size": "89437512",
	I0816 13:12:11.574194   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574202   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574207   40000 command_runner.go:130] >       },
	I0816 13:12:11.574211   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574216   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574221   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574224   40000 command_runner.go:130] >     },
	I0816 13:12:11.574228   40000 command_runner.go:130] >     {
	I0816 13:12:11.574234   40000 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 13:12:11.574238   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574242   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 13:12:11.574245   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574249   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574256   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 13:12:11.574263   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 13:12:11.574266   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574269   40000 command_runner.go:130] >       "size": "92728217",
	I0816 13:12:11.574273   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.574277   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574280   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574284   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574286   40000 command_runner.go:130] >     },
	I0816 13:12:11.574289   40000 command_runner.go:130] >     {
	I0816 13:12:11.574294   40000 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 13:12:11.574298   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574303   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 13:12:11.574309   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574313   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574321   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 13:12:11.574330   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 13:12:11.574339   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574345   40000 command_runner.go:130] >       "size": "68420936",
	I0816 13:12:11.574349   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574357   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574363   40000 command_runner.go:130] >       },
	I0816 13:12:11.574367   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574373   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574377   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574383   40000 command_runner.go:130] >     },
	I0816 13:12:11.574386   40000 command_runner.go:130] >     {
	I0816 13:12:11.574394   40000 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 13:12:11.574400   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574405   40000 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 13:12:11.574410   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574414   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574421   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 13:12:11.574429   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 13:12:11.574435   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574440   40000 command_runner.go:130] >       "size": "742080",
	I0816 13:12:11.574445   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574449   40000 command_runner.go:130] >         "value": "65535"
	I0816 13:12:11.574455   40000 command_runner.go:130] >       },
	I0816 13:12:11.574460   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574466   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574469   40000 command_runner.go:130] >       "pinned": true
	I0816 13:12:11.574475   40000 command_runner.go:130] >     }
	I0816 13:12:11.574478   40000 command_runner.go:130] >   ]
	I0816 13:12:11.574481   40000 command_runner.go:130] > }
	I0816 13:12:11.574648   40000 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:12:11.574658   40000 crio.go:433] Images already preloaded, skipping extraction
	I0816 13:12:11.574701   40000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:12:11.607970   40000 command_runner.go:130] > {
	I0816 13:12:11.607997   40000 command_runner.go:130] >   "images": [
	I0816 13:12:11.608002   40000 command_runner.go:130] >     {
	I0816 13:12:11.608010   40000 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 13:12:11.608015   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608021   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 13:12:11.608025   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608029   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608038   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 13:12:11.608045   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 13:12:11.608048   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608052   40000 command_runner.go:130] >       "size": "87165492",
	I0816 13:12:11.608057   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608072   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608079   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608087   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608090   40000 command_runner.go:130] >     },
	I0816 13:12:11.608097   40000 command_runner.go:130] >     {
	I0816 13:12:11.608103   40000 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 13:12:11.608111   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608144   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 13:12:11.608147   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608151   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608158   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 13:12:11.608167   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 13:12:11.608173   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608177   40000 command_runner.go:130] >       "size": "87190579",
	I0816 13:12:11.608181   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608188   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608194   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608198   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608203   40000 command_runner.go:130] >     },
	I0816 13:12:11.608207   40000 command_runner.go:130] >     {
	I0816 13:12:11.608213   40000 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 13:12:11.608218   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608223   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 13:12:11.608227   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608230   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608237   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 13:12:11.608246   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 13:12:11.608250   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608254   40000 command_runner.go:130] >       "size": "1363676",
	I0816 13:12:11.608259   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608264   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608270   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608274   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608279   40000 command_runner.go:130] >     },
	I0816 13:12:11.608282   40000 command_runner.go:130] >     {
	I0816 13:12:11.608290   40000 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 13:12:11.608293   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608305   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 13:12:11.608311   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608315   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608325   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 13:12:11.608339   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 13:12:11.608345   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608350   40000 command_runner.go:130] >       "size": "31470524",
	I0816 13:12:11.608353   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608357   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608361   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608368   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608373   40000 command_runner.go:130] >     },
	I0816 13:12:11.608377   40000 command_runner.go:130] >     {
	I0816 13:12:11.608383   40000 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 13:12:11.608389   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608394   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 13:12:11.608400   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608404   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608411   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 13:12:11.608436   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 13:12:11.608447   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608452   40000 command_runner.go:130] >       "size": "61245718",
	I0816 13:12:11.608456   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608460   40000 command_runner.go:130] >       "username": "nonroot",
	I0816 13:12:11.608464   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608468   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608474   40000 command_runner.go:130] >     },
	I0816 13:12:11.608477   40000 command_runner.go:130] >     {
	I0816 13:12:11.608483   40000 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 13:12:11.608488   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608492   40000 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 13:12:11.608498   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608503   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608510   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 13:12:11.608518   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 13:12:11.608522   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608526   40000 command_runner.go:130] >       "size": "149009664",
	I0816 13:12:11.608529   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608533   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608539   40000 command_runner.go:130] >       },
	I0816 13:12:11.608542   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608546   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608550   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608554   40000 command_runner.go:130] >     },
	I0816 13:12:11.608557   40000 command_runner.go:130] >     {
	I0816 13:12:11.608564   40000 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 13:12:11.608570   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608576   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 13:12:11.608580   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608583   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608593   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 13:12:11.608600   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 13:12:11.608605   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608609   40000 command_runner.go:130] >       "size": "95233506",
	I0816 13:12:11.608613   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608617   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608620   40000 command_runner.go:130] >       },
	I0816 13:12:11.608624   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608629   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608634   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608637   40000 command_runner.go:130] >     },
	I0816 13:12:11.608641   40000 command_runner.go:130] >     {
	I0816 13:12:11.608653   40000 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 13:12:11.608658   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608664   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 13:12:11.608670   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608674   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608691   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 13:12:11.608702   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 13:12:11.608709   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608716   40000 command_runner.go:130] >       "size": "89437512",
	I0816 13:12:11.608722   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608726   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608730   40000 command_runner.go:130] >       },
	I0816 13:12:11.608734   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608740   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608743   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608747   40000 command_runner.go:130] >     },
	I0816 13:12:11.608752   40000 command_runner.go:130] >     {
	I0816 13:12:11.608757   40000 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 13:12:11.608763   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608768   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 13:12:11.608778   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608783   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608792   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 13:12:11.608800   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 13:12:11.608805   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608809   40000 command_runner.go:130] >       "size": "92728217",
	I0816 13:12:11.608813   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608817   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608821   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608827   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608830   40000 command_runner.go:130] >     },
	I0816 13:12:11.608834   40000 command_runner.go:130] >     {
	I0816 13:12:11.608840   40000 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 13:12:11.608846   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608851   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 13:12:11.608856   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608860   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608867   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 13:12:11.608876   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 13:12:11.608880   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608884   40000 command_runner.go:130] >       "size": "68420936",
	I0816 13:12:11.608890   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608894   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608897   40000 command_runner.go:130] >       },
	I0816 13:12:11.608901   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608918   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608924   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608932   40000 command_runner.go:130] >     },
	I0816 13:12:11.608936   40000 command_runner.go:130] >     {
	I0816 13:12:11.608948   40000 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 13:12:11.608957   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608964   40000 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 13:12:11.608970   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608974   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608980   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 13:12:11.608990   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 13:12:11.608994   40000 command_runner.go:130] >       ],
	I0816 13:12:11.609001   40000 command_runner.go:130] >       "size": "742080",
	I0816 13:12:11.609004   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.609008   40000 command_runner.go:130] >         "value": "65535"
	I0816 13:12:11.609012   40000 command_runner.go:130] >       },
	I0816 13:12:11.609016   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.609019   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.609023   40000 command_runner.go:130] >       "pinned": true
	I0816 13:12:11.609027   40000 command_runner.go:130] >     }
	I0816 13:12:11.609030   40000 command_runner.go:130] >   ]
	I0816 13:12:11.609035   40000 command_runner.go:130] > }
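The JSON dump above is the image inventory minikube reads back from CRI-O before concluding, two log lines below, that all images are preloaded. As a hedged illustration only, the Go sketch below decodes output of this shape and reports the pinned pause image; the top-level "images" key and the exact field list are assumptions inferred from the fields visible in the log, not an authoritative schema, and the sample payload is abbreviated to a single entry taken from the dump.

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors the fields visible in the log output above.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

// imageList assumes a {"images": [...]} wrapper around the entries.
type imageList struct {
	Images []imageEntry `json:"images"`
}

func main() {
	// One entry copied from the dump above (pause:3.10, pinned).
	raw := []byte(`{"images":[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"],"repoDigests":[],"size":"742080","pinned":true}]}`)

	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		if img.Pinned {
			fmt.Printf("pinned image present: %v (size %s bytes)\n", img.RepoTags, img.Size)
		}
	}
}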
	I0816 13:12:11.609638   40000 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:12:11.609661   40000 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:12:11.609669   40000 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.0 crio true true} ...
	I0816 13:12:11.609781   40000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-336982 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
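The [Unit]/[Service]/[Install] fragment above is the kubelet ExecStart override minikube derives from the cluster config printed with it (KubernetesVersion v1.31.0, hostname multinode-336982, node IP 192.168.39.208). The Go sketch below only illustrates how such a drop-in could be rendered with text/template from those three values; it is not minikube's actual template, struct, or file layout.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the values that vary per node in the ExecStart line
// shown above; the field names here are illustrative, not minikube's.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	opts := kubeletOpts{
		KubernetesVersion: "v1.31.0",
		NodeName:          "multinode-336982",
		NodeIP:            "192.168.39.208",
	}
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Render the drop-in to stdout; minikube writes equivalent content to a
	// systemd drop-in on the node before restarting the kubelet.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}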
	I0816 13:12:11.609843   40000 ssh_runner.go:195] Run: crio config
	I0816 13:12:11.642737   40000 command_runner.go:130] ! time="2024-08-16 13:12:11.620759817Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0816 13:12:11.649389   40000 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0816 13:12:11.656708   40000 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0816 13:12:11.656728   40000 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0816 13:12:11.656735   40000 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0816 13:12:11.656739   40000 command_runner.go:130] > #
	I0816 13:12:11.656747   40000 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0816 13:12:11.656753   40000 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0816 13:12:11.656760   40000 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0816 13:12:11.656772   40000 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0816 13:12:11.656779   40000 command_runner.go:130] > # reload'.
	I0816 13:12:11.656788   40000 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0816 13:12:11.656801   40000 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0816 13:12:11.656810   40000 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0816 13:12:11.656822   40000 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0816 13:12:11.656830   40000 command_runner.go:130] > [crio]
	I0816 13:12:11.656842   40000 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0816 13:12:11.656853   40000 command_runner.go:130] > # containers images, in this directory.
	I0816 13:12:11.656860   40000 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0816 13:12:11.656876   40000 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0816 13:12:11.656885   40000 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0816 13:12:11.656893   40000 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0816 13:12:11.656899   40000 command_runner.go:130] > # imagestore = ""
	I0816 13:12:11.656923   40000 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0816 13:12:11.656936   40000 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0816 13:12:11.656943   40000 command_runner.go:130] > storage_driver = "overlay"
	I0816 13:12:11.656955   40000 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0816 13:12:11.656967   40000 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0816 13:12:11.656976   40000 command_runner.go:130] > storage_option = [
	I0816 13:12:11.656986   40000 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0816 13:12:11.656994   40000 command_runner.go:130] > ]
	I0816 13:12:11.657006   40000 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0816 13:12:11.657019   40000 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0816 13:12:11.657028   40000 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0816 13:12:11.657039   40000 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0816 13:12:11.657052   40000 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0816 13:12:11.657062   40000 command_runner.go:130] > # always happen on a node reboot
	I0816 13:12:11.657073   40000 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0816 13:12:11.657085   40000 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0816 13:12:11.657092   40000 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0816 13:12:11.657100   40000 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0816 13:12:11.657105   40000 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0816 13:12:11.657114   40000 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0816 13:12:11.657124   40000 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0816 13:12:11.657131   40000 command_runner.go:130] > # internal_wipe = true
	I0816 13:12:11.657139   40000 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0816 13:12:11.657146   40000 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0816 13:12:11.657150   40000 command_runner.go:130] > # internal_repair = false
	I0816 13:12:11.657157   40000 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0816 13:12:11.657163   40000 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0816 13:12:11.657171   40000 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0816 13:12:11.657178   40000 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0816 13:12:11.657185   40000 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0816 13:12:11.657191   40000 command_runner.go:130] > [crio.api]
	I0816 13:12:11.657196   40000 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0816 13:12:11.657201   40000 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0816 13:12:11.657208   40000 command_runner.go:130] > # IP address on which the stream server will listen.
	I0816 13:12:11.657212   40000 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0816 13:12:11.657220   40000 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0816 13:12:11.657229   40000 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0816 13:12:11.657235   40000 command_runner.go:130] > # stream_port = "0"
	I0816 13:12:11.657241   40000 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0816 13:12:11.657247   40000 command_runner.go:130] > # stream_enable_tls = false
	I0816 13:12:11.657255   40000 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0816 13:12:11.657261   40000 command_runner.go:130] > # stream_idle_timeout = ""
	I0816 13:12:11.657267   40000 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0816 13:12:11.657277   40000 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0816 13:12:11.657283   40000 command_runner.go:130] > # minutes.
	I0816 13:12:11.657288   40000 command_runner.go:130] > # stream_tls_cert = ""
	I0816 13:12:11.657294   40000 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0816 13:12:11.657302   40000 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0816 13:12:11.657309   40000 command_runner.go:130] > # stream_tls_key = ""
	I0816 13:12:11.657314   40000 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0816 13:12:11.657323   40000 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0816 13:12:11.657336   40000 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0816 13:12:11.657342   40000 command_runner.go:130] > # stream_tls_ca = ""
	I0816 13:12:11.657349   40000 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 13:12:11.657356   40000 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0816 13:12:11.657363   40000 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 13:12:11.657369   40000 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
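The two grpc_max_*_msg_size values above cap CRI messages at 16777216 bytes (16 MiB) instead of CRI-O's 80 * 1024 * 1024 default. As a sketch only, the Go snippet below dials the default CRI-O socket from the [crio.api] section with matching client-side limits and lists images over the CRI API; the socket path and the use of the k8s.io/cri-api client are assumptions for illustration, not something this test run performs.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	const maxMsg = 16777216 // matches grpc_max_send_msg_size / grpc_max_recv_msg_size above

	// Dial the default CRI-O listen socket shown in the [crio.api] section.
	conn, err := grpc.Dial(
		"unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsg),
			grpc.MaxCallSendMsgSize(maxMsg),
		),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// ListImages returns the same inventory that is printed as JSON earlier in this log.
	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ListImages(ctx, &runtimeapi.ListImagesRequest{})
	if err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.Id, img.RepoTags)
	}
}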
	I0816 13:12:11.657376   40000 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0816 13:12:11.657383   40000 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0816 13:12:11.657387   40000 command_runner.go:130] > [crio.runtime]
	I0816 13:12:11.657395   40000 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0816 13:12:11.657403   40000 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0816 13:12:11.657407   40000 command_runner.go:130] > # "nofile=1024:2048"
	I0816 13:12:11.657415   40000 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0816 13:12:11.657421   40000 command_runner.go:130] > # default_ulimits = [
	I0816 13:12:11.657425   40000 command_runner.go:130] > # ]
	I0816 13:12:11.657433   40000 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0816 13:12:11.657437   40000 command_runner.go:130] > # no_pivot = false
	I0816 13:12:11.657443   40000 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0816 13:12:11.657450   40000 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0816 13:12:11.657455   40000 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0816 13:12:11.657462   40000 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0816 13:12:11.657468   40000 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0816 13:12:11.657476   40000 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 13:12:11.657483   40000 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0816 13:12:11.657488   40000 command_runner.go:130] > # Cgroup setting for conmon
	I0816 13:12:11.657496   40000 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0816 13:12:11.657503   40000 command_runner.go:130] > conmon_cgroup = "pod"
	I0816 13:12:11.657509   40000 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0816 13:12:11.657517   40000 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0816 13:12:11.657526   40000 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 13:12:11.657532   40000 command_runner.go:130] > conmon_env = [
	I0816 13:12:11.657538   40000 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 13:12:11.657543   40000 command_runner.go:130] > ]
	I0816 13:12:11.657549   40000 command_runner.go:130] > # Additional environment variables to set for all the
	I0816 13:12:11.657556   40000 command_runner.go:130] > # containers. These are overridden if set in the
	I0816 13:12:11.657562   40000 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0816 13:12:11.657568   40000 command_runner.go:130] > # default_env = [
	I0816 13:12:11.657571   40000 command_runner.go:130] > # ]
	I0816 13:12:11.657577   40000 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0816 13:12:11.657586   40000 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0816 13:12:11.657592   40000 command_runner.go:130] > # selinux = false
	I0816 13:12:11.657597   40000 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0816 13:12:11.657606   40000 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0816 13:12:11.657613   40000 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0816 13:12:11.657618   40000 command_runner.go:130] > # seccomp_profile = ""
	I0816 13:12:11.657624   40000 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0816 13:12:11.657632   40000 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0816 13:12:11.657640   40000 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0816 13:12:11.657644   40000 command_runner.go:130] > # which might increase security.
	I0816 13:12:11.657651   40000 command_runner.go:130] > # This option is currently deprecated,
	I0816 13:12:11.657657   40000 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0816 13:12:11.657663   40000 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0816 13:12:11.657669   40000 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0816 13:12:11.657677   40000 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0816 13:12:11.657683   40000 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0816 13:12:11.657691   40000 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0816 13:12:11.657698   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.657703   40000 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0816 13:12:11.657710   40000 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0816 13:12:11.657714   40000 command_runner.go:130] > # the cgroup blockio controller.
	I0816 13:12:11.657720   40000 command_runner.go:130] > # blockio_config_file = ""
	I0816 13:12:11.657727   40000 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0816 13:12:11.657732   40000 command_runner.go:130] > # blockio parameters.
	I0816 13:12:11.657736   40000 command_runner.go:130] > # blockio_reload = false
	I0816 13:12:11.657744   40000 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0816 13:12:11.657750   40000 command_runner.go:130] > # irqbalance daemon.
	I0816 13:12:11.657755   40000 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0816 13:12:11.657761   40000 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0816 13:12:11.657769   40000 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0816 13:12:11.657779   40000 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0816 13:12:11.657786   40000 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0816 13:12:11.657796   40000 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0816 13:12:11.657803   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.657807   40000 command_runner.go:130] > # rdt_config_file = ""
	I0816 13:12:11.657814   40000 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0816 13:12:11.657819   40000 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0816 13:12:11.657835   40000 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0816 13:12:11.657841   40000 command_runner.go:130] > # separate_pull_cgroup = ""
	I0816 13:12:11.657847   40000 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0816 13:12:11.657854   40000 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0816 13:12:11.657862   40000 command_runner.go:130] > # will be added.
	I0816 13:12:11.657866   40000 command_runner.go:130] > # default_capabilities = [
	I0816 13:12:11.657872   40000 command_runner.go:130] > # 	"CHOWN",
	I0816 13:12:11.657876   40000 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0816 13:12:11.657881   40000 command_runner.go:130] > # 	"FSETID",
	I0816 13:12:11.657885   40000 command_runner.go:130] > # 	"FOWNER",
	I0816 13:12:11.657892   40000 command_runner.go:130] > # 	"SETGID",
	I0816 13:12:11.657896   40000 command_runner.go:130] > # 	"SETUID",
	I0816 13:12:11.657902   40000 command_runner.go:130] > # 	"SETPCAP",
	I0816 13:12:11.657906   40000 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0816 13:12:11.657912   40000 command_runner.go:130] > # 	"KILL",
	I0816 13:12:11.657915   40000 command_runner.go:130] > # ]
	I0816 13:12:11.657925   40000 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0816 13:12:11.657933   40000 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0816 13:12:11.657938   40000 command_runner.go:130] > # add_inheritable_capabilities = false
	I0816 13:12:11.657947   40000 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0816 13:12:11.657959   40000 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 13:12:11.657968   40000 command_runner.go:130] > default_sysctls = [
	I0816 13:12:11.657978   40000 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0816 13:12:11.657986   40000 command_runner.go:130] > ]
	I0816 13:12:11.657992   40000 command_runner.go:130] > # List of devices on the host that a
	I0816 13:12:11.658003   40000 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0816 13:12:11.658012   40000 command_runner.go:130] > # allowed_devices = [
	I0816 13:12:11.658018   40000 command_runner.go:130] > # 	"/dev/fuse",
	I0816 13:12:11.658026   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658034   40000 command_runner.go:130] > # List of additional devices, specified as
	I0816 13:12:11.658042   40000 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0816 13:12:11.658049   40000 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0816 13:12:11.658055   40000 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 13:12:11.658061   40000 command_runner.go:130] > # additional_devices = [
	I0816 13:12:11.658065   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658072   40000 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0816 13:12:11.658077   40000 command_runner.go:130] > # cdi_spec_dirs = [
	I0816 13:12:11.658082   40000 command_runner.go:130] > # 	"/etc/cdi",
	I0816 13:12:11.658087   40000 command_runner.go:130] > # 	"/var/run/cdi",
	I0816 13:12:11.658092   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658098   40000 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0816 13:12:11.658106   40000 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0816 13:12:11.658112   40000 command_runner.go:130] > # Defaults to false.
	I0816 13:12:11.658117   40000 command_runner.go:130] > # device_ownership_from_security_context = false
	I0816 13:12:11.658125   40000 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0816 13:12:11.658133   40000 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0816 13:12:11.658138   40000 command_runner.go:130] > # hooks_dir = [
	I0816 13:12:11.658145   40000 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0816 13:12:11.658148   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658154   40000 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0816 13:12:11.658162   40000 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0816 13:12:11.658169   40000 command_runner.go:130] > # its default mounts from the following two files:
	I0816 13:12:11.658175   40000 command_runner.go:130] > #
	I0816 13:12:11.658181   40000 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0816 13:12:11.658189   40000 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0816 13:12:11.658197   40000 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0816 13:12:11.658201   40000 command_runner.go:130] > #
	I0816 13:12:11.658209   40000 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0816 13:12:11.658219   40000 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0816 13:12:11.658230   40000 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0816 13:12:11.658237   40000 command_runner.go:130] > #      only add mounts it finds in this file.
	I0816 13:12:11.658241   40000 command_runner.go:130] > #
	I0816 13:12:11.658246   40000 command_runner.go:130] > # default_mounts_file = ""
	I0816 13:12:11.658252   40000 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0816 13:12:11.658261   40000 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0816 13:12:11.658265   40000 command_runner.go:130] > pids_limit = 1024
	I0816 13:12:11.658273   40000 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0816 13:12:11.658281   40000 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0816 13:12:11.658290   40000 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0816 13:12:11.658297   40000 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0816 13:12:11.658303   40000 command_runner.go:130] > # log_size_max = -1
	I0816 13:12:11.658310   40000 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0816 13:12:11.658316   40000 command_runner.go:130] > # log_to_journald = false
	I0816 13:12:11.658323   40000 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0816 13:12:11.658332   40000 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0816 13:12:11.658339   40000 command_runner.go:130] > # Path to directory for container attach sockets.
	I0816 13:12:11.658344   40000 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0816 13:12:11.658352   40000 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0816 13:12:11.658356   40000 command_runner.go:130] > # bind_mount_prefix = ""
	I0816 13:12:11.658363   40000 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0816 13:12:11.658367   40000 command_runner.go:130] > # read_only = false
	I0816 13:12:11.658375   40000 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0816 13:12:11.658385   40000 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0816 13:12:11.658391   40000 command_runner.go:130] > # live configuration reload.
	I0816 13:12:11.658395   40000 command_runner.go:130] > # log_level = "info"
	I0816 13:12:11.658401   40000 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0816 13:12:11.658407   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.658411   40000 command_runner.go:130] > # log_filter = ""
	I0816 13:12:11.658420   40000 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0816 13:12:11.658428   40000 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0816 13:12:11.658435   40000 command_runner.go:130] > # separated by comma.
	I0816 13:12:11.658442   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658448   40000 command_runner.go:130] > # uid_mappings = ""
	I0816 13:12:11.658453   40000 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0816 13:12:11.658461   40000 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0816 13:12:11.658467   40000 command_runner.go:130] > # separated by comma.
	I0816 13:12:11.658476   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658482   40000 command_runner.go:130] > # gid_mappings = ""
	I0816 13:12:11.658488   40000 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0816 13:12:11.658496   40000 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 13:12:11.658504   40000 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 13:12:11.658514   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658520   40000 command_runner.go:130] > # minimum_mappable_uid = -1
	I0816 13:12:11.658526   40000 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0816 13:12:11.658534   40000 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 13:12:11.658543   40000 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 13:12:11.658553   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658559   40000 command_runner.go:130] > # minimum_mappable_gid = -1
	I0816 13:12:11.658565   40000 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0816 13:12:11.658573   40000 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0816 13:12:11.658581   40000 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0816 13:12:11.658585   40000 command_runner.go:130] > # ctr_stop_timeout = 30
	I0816 13:12:11.658592   40000 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0816 13:12:11.658601   40000 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0816 13:12:11.658607   40000 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0816 13:12:11.658614   40000 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0816 13:12:11.658618   40000 command_runner.go:130] > drop_infra_ctr = false
	I0816 13:12:11.658626   40000 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0816 13:12:11.658634   40000 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0816 13:12:11.658644   40000 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0816 13:12:11.658649   40000 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0816 13:12:11.658656   40000 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0816 13:12:11.658664   40000 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0816 13:12:11.658671   40000 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0816 13:12:11.658676   40000 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0816 13:12:11.658682   40000 command_runner.go:130] > # shared_cpuset = ""
	I0816 13:12:11.658687   40000 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0816 13:12:11.658694   40000 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0816 13:12:11.658699   40000 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0816 13:12:11.658707   40000 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0816 13:12:11.658714   40000 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0816 13:12:11.658719   40000 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0816 13:12:11.658727   40000 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0816 13:12:11.658733   40000 command_runner.go:130] > # enable_criu_support = false
	I0816 13:12:11.658738   40000 command_runner.go:130] > # Enable/disable the generation of the container,
	I0816 13:12:11.658746   40000 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0816 13:12:11.658753   40000 command_runner.go:130] > # enable_pod_events = false
	I0816 13:12:11.658759   40000 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 13:12:11.658767   40000 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 13:12:11.658774   40000 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0816 13:12:11.658779   40000 command_runner.go:130] > # default_runtime = "runc"
	I0816 13:12:11.658786   40000 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0816 13:12:11.658793   40000 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0816 13:12:11.658804   40000 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0816 13:12:11.658811   40000 command_runner.go:130] > # creation as a file is not desired either.
	I0816 13:12:11.658819   40000 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0816 13:12:11.658825   40000 command_runner.go:130] > # the hostname is being managed dynamically.
	I0816 13:12:11.658830   40000 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0816 13:12:11.658835   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658841   40000 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0816 13:12:11.658849   40000 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0816 13:12:11.658857   40000 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0816 13:12:11.658864   40000 command_runner.go:130] > # Each entry in the table should follow the format:
	I0816 13:12:11.658868   40000 command_runner.go:130] > #
	I0816 13:12:11.658873   40000 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0816 13:12:11.658880   40000 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0816 13:12:11.658920   40000 command_runner.go:130] > # runtime_type = "oci"
	I0816 13:12:11.658929   40000 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0816 13:12:11.658933   40000 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0816 13:12:11.658938   40000 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0816 13:12:11.658945   40000 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0816 13:12:11.658950   40000 command_runner.go:130] > # monitor_env = []
	I0816 13:12:11.658960   40000 command_runner.go:130] > # privileged_without_host_devices = false
	I0816 13:12:11.658969   40000 command_runner.go:130] > # allowed_annotations = []
	I0816 13:12:11.658980   40000 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0816 13:12:11.658988   40000 command_runner.go:130] > # Where:
	I0816 13:12:11.658999   40000 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0816 13:12:11.659012   40000 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0816 13:12:11.659024   40000 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0816 13:12:11.659036   40000 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0816 13:12:11.659045   40000 command_runner.go:130] > #   in $PATH.
	I0816 13:12:11.659058   40000 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0816 13:12:11.659067   40000 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0816 13:12:11.659075   40000 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0816 13:12:11.659082   40000 command_runner.go:130] > #   state.
	I0816 13:12:11.659088   40000 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0816 13:12:11.659096   40000 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0816 13:12:11.659104   40000 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0816 13:12:11.659112   40000 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0816 13:12:11.659118   40000 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0816 13:12:11.659126   40000 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0816 13:12:11.659135   40000 command_runner.go:130] > #   The currently recognized values are:
	I0816 13:12:11.659143   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0816 13:12:11.659153   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0816 13:12:11.659160   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0816 13:12:11.659168   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0816 13:12:11.659180   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0816 13:12:11.659188   40000 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0816 13:12:11.659197   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0816 13:12:11.659205   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0816 13:12:11.659214   40000 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0816 13:12:11.659221   40000 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0816 13:12:11.659231   40000 command_runner.go:130] > #   deprecated option "conmon".
	I0816 13:12:11.659238   40000 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0816 13:12:11.659245   40000 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0816 13:12:11.659252   40000 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0816 13:12:11.659259   40000 command_runner.go:130] > #   should be moved to the container's cgroup
	I0816 13:12:11.659266   40000 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0816 13:12:11.659273   40000 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0816 13:12:11.659279   40000 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0816 13:12:11.659287   40000 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0816 13:12:11.659290   40000 command_runner.go:130] > #
	I0816 13:12:11.659297   40000 command_runner.go:130] > # Using the seccomp notifier feature:
	I0816 13:12:11.659300   40000 command_runner.go:130] > #
	I0816 13:12:11.659306   40000 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0816 13:12:11.659314   40000 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0816 13:12:11.659320   40000 command_runner.go:130] > #
	I0816 13:12:11.659326   40000 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0816 13:12:11.659334   40000 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0816 13:12:11.659337   40000 command_runner.go:130] > #
	I0816 13:12:11.659344   40000 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0816 13:12:11.659349   40000 command_runner.go:130] > # feature.
	I0816 13:12:11.659353   40000 command_runner.go:130] > #
	I0816 13:12:11.659361   40000 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0816 13:12:11.659367   40000 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0816 13:12:11.659375   40000 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0816 13:12:11.659383   40000 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0816 13:12:11.659391   40000 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0816 13:12:11.659394   40000 command_runner.go:130] > #
	I0816 13:12:11.659400   40000 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0816 13:12:11.659408   40000 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0816 13:12:11.659413   40000 command_runner.go:130] > #
	I0816 13:12:11.659419   40000 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0816 13:12:11.659426   40000 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0816 13:12:11.659429   40000 command_runner.go:130] > #
	I0816 13:12:11.659435   40000 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0816 13:12:11.659443   40000 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0816 13:12:11.659449   40000 command_runner.go:130] > # limitation.
	I0816 13:12:11.659454   40000 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0816 13:12:11.659461   40000 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0816 13:12:11.659464   40000 command_runner.go:130] > runtime_type = "oci"
	I0816 13:12:11.659468   40000 command_runner.go:130] > runtime_root = "/run/runc"
	I0816 13:12:11.659474   40000 command_runner.go:130] > runtime_config_path = ""
	I0816 13:12:11.659482   40000 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0816 13:12:11.659486   40000 command_runner.go:130] > monitor_cgroup = "pod"
	I0816 13:12:11.659492   40000 command_runner.go:130] > monitor_exec_cgroup = ""
	I0816 13:12:11.659496   40000 command_runner.go:130] > monitor_env = [
	I0816 13:12:11.659504   40000 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 13:12:11.659508   40000 command_runner.go:130] > ]
	I0816 13:12:11.659513   40000 command_runner.go:130] > privileged_without_host_devices = false
	I0816 13:12:11.659521   40000 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0816 13:12:11.659528   40000 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0816 13:12:11.659535   40000 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0816 13:12:11.659544   40000 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0816 13:12:11.659553   40000 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0816 13:12:11.659559   40000 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0816 13:12:11.659570   40000 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0816 13:12:11.659580   40000 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0816 13:12:11.659587   40000 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0816 13:12:11.659594   40000 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0816 13:12:11.659598   40000 command_runner.go:130] > # Example:
	I0816 13:12:11.659603   40000 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0816 13:12:11.659608   40000 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0816 13:12:11.659613   40000 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0816 13:12:11.659617   40000 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0816 13:12:11.659621   40000 command_runner.go:130] > # cpuset = 0
	I0816 13:12:11.659624   40000 command_runner.go:130] > # cpushares = "0-1"
	I0816 13:12:11.659628   40000 command_runner.go:130] > # Where:
	I0816 13:12:11.659632   40000 command_runner.go:130] > # The workload name is workload-type.
	I0816 13:12:11.659639   40000 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0816 13:12:11.659644   40000 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0816 13:12:11.659649   40000 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0816 13:12:11.659662   40000 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0816 13:12:11.659668   40000 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0816 13:12:11.659672   40000 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0816 13:12:11.659678   40000 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0816 13:12:11.659682   40000 command_runner.go:130] > # Default value is set to true
	I0816 13:12:11.659687   40000 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0816 13:12:11.659692   40000 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0816 13:12:11.659696   40000 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0816 13:12:11.659702   40000 command_runner.go:130] > # Default value is set to 'false'
	I0816 13:12:11.659706   40000 command_runner.go:130] > # disable_hostport_mapping = false
	I0816 13:12:11.659712   40000 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0816 13:12:11.659715   40000 command_runner.go:130] > #
	I0816 13:12:11.659720   40000 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0816 13:12:11.659726   40000 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0816 13:12:11.659733   40000 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0816 13:12:11.659739   40000 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0816 13:12:11.659745   40000 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0816 13:12:11.659748   40000 command_runner.go:130] > [crio.image]
	I0816 13:12:11.659753   40000 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0816 13:12:11.659758   40000 command_runner.go:130] > # default_transport = "docker://"
	I0816 13:12:11.659764   40000 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0816 13:12:11.659773   40000 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0816 13:12:11.659777   40000 command_runner.go:130] > # global_auth_file = ""
	I0816 13:12:11.659784   40000 command_runner.go:130] > # The image used to instantiate infra containers.
	I0816 13:12:11.659788   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.659795   40000 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0816 13:12:11.659801   40000 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0816 13:12:11.659809   40000 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0816 13:12:11.659814   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.659820   40000 command_runner.go:130] > # pause_image_auth_file = ""
	I0816 13:12:11.659826   40000 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0816 13:12:11.659835   40000 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0816 13:12:11.659843   40000 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0816 13:12:11.659851   40000 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0816 13:12:11.659857   40000 command_runner.go:130] > # pause_command = "/pause"
	I0816 13:12:11.659863   40000 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0816 13:12:11.659877   40000 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0816 13:12:11.659886   40000 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0816 13:12:11.659894   40000 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0816 13:12:11.659903   40000 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0816 13:12:11.659911   40000 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0816 13:12:11.659917   40000 command_runner.go:130] > # pinned_images = [
	I0816 13:12:11.659921   40000 command_runner.go:130] > # ]
	I0816 13:12:11.659930   40000 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0816 13:12:11.659938   40000 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0816 13:12:11.659947   40000 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0816 13:12:11.659959   40000 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0816 13:12:11.659970   40000 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0816 13:12:11.659978   40000 command_runner.go:130] > # signature_policy = ""
	I0816 13:12:11.659990   40000 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0816 13:12:11.660003   40000 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0816 13:12:11.660015   40000 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0816 13:12:11.660027   40000 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0816 13:12:11.660039   40000 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0816 13:12:11.660049   40000 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0816 13:12:11.660058   40000 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0816 13:12:11.660065   40000 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0816 13:12:11.660072   40000 command_runner.go:130] > # changing them here.
	I0816 13:12:11.660076   40000 command_runner.go:130] > # insecure_registries = [
	I0816 13:12:11.660081   40000 command_runner.go:130] > # ]
	I0816 13:12:11.660087   40000 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0816 13:12:11.660094   40000 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0816 13:12:11.660099   40000 command_runner.go:130] > # image_volumes = "mkdir"
	I0816 13:12:11.660106   40000 command_runner.go:130] > # Temporary directory to use for storing big files
	I0816 13:12:11.660110   40000 command_runner.go:130] > # big_files_temporary_dir = ""
	I0816 13:12:11.660118   40000 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0816 13:12:11.660122   40000 command_runner.go:130] > # CNI plugins.
	I0816 13:12:11.660128   40000 command_runner.go:130] > [crio.network]
	I0816 13:12:11.660134   40000 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0816 13:12:11.660141   40000 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0816 13:12:11.660145   40000 command_runner.go:130] > # cni_default_network = ""
	I0816 13:12:11.660152   40000 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0816 13:12:11.660164   40000 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0816 13:12:11.660172   40000 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0816 13:12:11.660179   40000 command_runner.go:130] > # plugin_dirs = [
	I0816 13:12:11.660183   40000 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0816 13:12:11.660188   40000 command_runner.go:130] > # ]
	I0816 13:12:11.660194   40000 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0816 13:12:11.660197   40000 command_runner.go:130] > [crio.metrics]
	I0816 13:12:11.660203   40000 command_runner.go:130] > # Globally enable or disable metrics support.
	I0816 13:12:11.660207   40000 command_runner.go:130] > enable_metrics = true
	I0816 13:12:11.660214   40000 command_runner.go:130] > # Specify enabled metrics collectors.
	I0816 13:12:11.660219   40000 command_runner.go:130] > # Per default all metrics are enabled.
	I0816 13:12:11.660230   40000 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0816 13:12:11.660238   40000 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0816 13:12:11.660246   40000 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0816 13:12:11.660250   40000 command_runner.go:130] > # metrics_collectors = [
	I0816 13:12:11.660256   40000 command_runner.go:130] > # 	"operations",
	I0816 13:12:11.660261   40000 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0816 13:12:11.660267   40000 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0816 13:12:11.660271   40000 command_runner.go:130] > # 	"operations_errors",
	I0816 13:12:11.660275   40000 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0816 13:12:11.660281   40000 command_runner.go:130] > # 	"image_pulls_by_name",
	I0816 13:12:11.660286   40000 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0816 13:12:11.660293   40000 command_runner.go:130] > # 	"image_pulls_failures",
	I0816 13:12:11.660297   40000 command_runner.go:130] > # 	"image_pulls_successes",
	I0816 13:12:11.660303   40000 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0816 13:12:11.660309   40000 command_runner.go:130] > # 	"image_layer_reuse",
	I0816 13:12:11.660316   40000 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0816 13:12:11.660320   40000 command_runner.go:130] > # 	"containers_oom_total",
	I0816 13:12:11.660326   40000 command_runner.go:130] > # 	"containers_oom",
	I0816 13:12:11.660330   40000 command_runner.go:130] > # 	"processes_defunct",
	I0816 13:12:11.660336   40000 command_runner.go:130] > # 	"operations_total",
	I0816 13:12:11.660340   40000 command_runner.go:130] > # 	"operations_latency_seconds",
	I0816 13:12:11.660346   40000 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0816 13:12:11.660351   40000 command_runner.go:130] > # 	"operations_errors_total",
	I0816 13:12:11.660358   40000 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0816 13:12:11.660362   40000 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0816 13:12:11.660372   40000 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0816 13:12:11.660379   40000 command_runner.go:130] > # 	"image_pulls_success_total",
	I0816 13:12:11.660383   40000 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0816 13:12:11.660389   40000 command_runner.go:130] > # 	"containers_oom_count_total",
	I0816 13:12:11.660393   40000 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0816 13:12:11.660400   40000 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0816 13:12:11.660403   40000 command_runner.go:130] > # ]
	I0816 13:12:11.660410   40000 command_runner.go:130] > # The port on which the metrics server will listen.
	I0816 13:12:11.660414   40000 command_runner.go:130] > # metrics_port = 9090
	I0816 13:12:11.660419   40000 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0816 13:12:11.660425   40000 command_runner.go:130] > # metrics_socket = ""
	I0816 13:12:11.660430   40000 command_runner.go:130] > # The certificate for the secure metrics server.
	I0816 13:12:11.660437   40000 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0816 13:12:11.660443   40000 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0816 13:12:11.660450   40000 command_runner.go:130] > # certificate on any modification event.
	I0816 13:12:11.660454   40000 command_runner.go:130] > # metrics_cert = ""
	I0816 13:12:11.660461   40000 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0816 13:12:11.660466   40000 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0816 13:12:11.660472   40000 command_runner.go:130] > # metrics_key = ""
	I0816 13:12:11.660479   40000 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0816 13:12:11.660485   40000 command_runner.go:130] > [crio.tracing]
	I0816 13:12:11.660491   40000 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0816 13:12:11.660497   40000 command_runner.go:130] > # enable_tracing = false
	I0816 13:12:11.660503   40000 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0816 13:12:11.660509   40000 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0816 13:12:11.660516   40000 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0816 13:12:11.660522   40000 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0816 13:12:11.660526   40000 command_runner.go:130] > # CRI-O NRI configuration.
	I0816 13:12:11.660532   40000 command_runner.go:130] > [crio.nri]
	I0816 13:12:11.660536   40000 command_runner.go:130] > # Globally enable or disable NRI.
	I0816 13:12:11.660542   40000 command_runner.go:130] > # enable_nri = false
	I0816 13:12:11.660546   40000 command_runner.go:130] > # NRI socket to listen on.
	I0816 13:12:11.660553   40000 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0816 13:12:11.660557   40000 command_runner.go:130] > # NRI plugin directory to use.
	I0816 13:12:11.660565   40000 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0816 13:12:11.660570   40000 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0816 13:12:11.660581   40000 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0816 13:12:11.660588   40000 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0816 13:12:11.660593   40000 command_runner.go:130] > # nri_disable_connections = false
	I0816 13:12:11.660600   40000 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0816 13:12:11.660604   40000 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0816 13:12:11.660611   40000 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0816 13:12:11.660615   40000 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0816 13:12:11.660621   40000 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0816 13:12:11.660627   40000 command_runner.go:130] > [crio.stats]
	I0816 13:12:11.660633   40000 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0816 13:12:11.660640   40000 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0816 13:12:11.660645   40000 command_runner.go:130] > # stats_collection_period = 0
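
The commented-out keys in the dump above are CRI-O's compiled-in defaults; in this run only enable_metrics is explicitly set. As an illustrative sketch (not taken from this run), the metrics and stats knobs above could be overridden with a drop-in file; the drop-in file name 10-metrics.conf is an assumption, while the keys and values mirror the dump:

    # hypothetical drop-in; CRI-O merges /etc/crio/crio.conf.d/*.conf over crio.conf
    sudo mkdir -p /etc/crio/crio.conf.d
    printf '[crio.metrics]\nenable_metrics = true\nmetrics_port = 9090\n\n[crio.stats]\nstats_collection_period = 0\n' \
      | sudo tee /etc/crio/crio.conf.d/10-metrics.conf
    sudo systemctl restart crio
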
	I0816 13:12:11.660799   40000 cni.go:84] Creating CNI manager for ""
	I0816 13:12:11.660814   40000 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 13:12:11.660824   40000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:12:11.660845   40000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-336982 NodeName:multinode-336982 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:12:11.660990   40000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-336982"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
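
The block above is the kubeadm/kubelet/kube-proxy configuration minikube renders before writing it to /var/tmp/minikube/kubeadm.yaml.new (the scp step below). A minimal sketch of how the rendered file could be checked on the node; the kubeadm binary path comes from this log, and the "kubeadm config validate" subcommand is assumed to be available in this kubeadm release:

    # inspect the rendered config and (assumption: subcommand present in v1.31) validate it
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
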
	
	I0816 13:12:11.661060   40000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:12:11.671252   40000 command_runner.go:130] > kubeadm
	I0816 13:12:11.671269   40000 command_runner.go:130] > kubectl
	I0816 13:12:11.671276   40000 command_runner.go:130] > kubelet
	I0816 13:12:11.671288   40000 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:12:11.671330   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:12:11.680410   40000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 13:12:11.697244   40000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:12:11.713522   40000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0816 13:12:11.730045   40000 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I0816 13:12:11.733977   40000 command_runner.go:130] > 192.168.39.208	control-plane.minikube.internal
	I0816 13:12:11.734038   40000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:12:11.886814   40000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:12:11.901857   40000 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982 for IP: 192.168.39.208
	I0816 13:12:11.901881   40000 certs.go:194] generating shared ca certs ...
	I0816 13:12:11.901895   40000 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:12:11.902096   40000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:12:11.902217   40000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:12:11.902232   40000 certs.go:256] generating profile certs ...
	I0816 13:12:11.902338   40000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/client.key
	I0816 13:12:11.902409   40000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.key.0d3a4771
	I0816 13:12:11.902462   40000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.key
	I0816 13:12:11.902476   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 13:12:11.902497   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 13:12:11.902515   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 13:12:11.902533   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 13:12:11.902547   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 13:12:11.902565   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 13:12:11.902584   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 13:12:11.902606   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 13:12:11.902669   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:12:11.902709   40000 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:12:11.902724   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:12:11.902757   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:12:11.902787   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:12:11.902826   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:12:11.902879   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:12:11.902917   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 13:12:11.902936   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:11.902956   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 13:12:11.903555   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:12:11.928501   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:12:11.951991   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:12:11.975912   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:12:11.998856   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:12:12.023369   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:12:12.046695   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:12:12.069962   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:12:12.093558   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:12:12.116801   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:12:12.140641   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:12:12.163663   40000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:12:12.180712   40000 ssh_runner.go:195] Run: openssl version
	I0816 13:12:12.186359   40000 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0816 13:12:12.186476   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:12:12.198403   40000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.206047   40000 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.206313   40000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.206358   40000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.215453   40000 command_runner.go:130] > b5213941
	I0816 13:12:12.215708   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:12:12.242165   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:12:12.266885   40000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.273182   40000 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.273359   40000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.273422   40000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.281766   40000 command_runner.go:130] > 51391683
	I0816 13:12:12.282058   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:12:12.310543   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:12:12.337804   40000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.342702   40000 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.342737   40000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.342791   40000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.353611   40000 command_runner.go:130] > 3ec20f2e
	I0816 13:12:12.353757   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
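
The last few commands follow OpenSSL's subject-hash trust-store convention: each CA file is linked into /etc/ssl/certs/ and a <hash>.0 symlink is created from its subject hash so the TLS library can look it up. A short sketch of the same two steps for one of the certificates above (b5213941 is the hash printed for minikubeCA.pem in this run):

    # compute the subject hash and create the <hash>.0 lookup symlink
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
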
	I0816 13:12:12.364036   40000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:12:12.374562   40000 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:12:12.374595   40000 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0816 13:12:12.374603   40000 command_runner.go:130] > Device: 253,1	Inode: 5244438     Links: 1
	I0816 13:12:12.374612   40000 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 13:12:12.374624   40000 command_runner.go:130] > Access: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374635   40000 command_runner.go:130] > Modify: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374648   40000 command_runner.go:130] > Change: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374659   40000 command_runner.go:130] >  Birth: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374726   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:12:12.383792   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.383867   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:12:12.389619   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.389697   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:12:12.395595   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.395668   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:12:12.403744   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.403992   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:12:12.415146   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.417091   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:12:12.427767   40000 command_runner.go:130] > Certificate will not expire
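
The -checkend 86400 checks above ask whether each certificate expires within the next 86400 seconds (24 hours); openssl prints "Certificate will not expire" and exits 0 when it does not, which is why each check here is followed by that line. A one-command sketch against one of the files checked above:

    # exit status 0 means the cert is still valid for at least another 24h
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt && echo still-valid
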
	I0816 13:12:12.427837   40000 kubeadm.go:392] StartCluster: {Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:12:12.427937   40000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:12:12.427987   40000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:12:12.492379   40000 command_runner.go:130] > 1bf884fd123a86f6a94ab5aea8257e3302f8a85a9269f32ebf4329e5e3a47b39
	I0816 13:12:12.492407   40000 command_runner.go:130] > bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe
	I0816 13:12:12.492416   40000 command_runner.go:130] > 851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383
	I0816 13:12:12.492431   40000 command_runner.go:130] > 171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793
	I0816 13:12:12.492605   40000 command_runner.go:130] > 212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463
	I0816 13:12:12.492755   40000 command_runner.go:130] > 65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70
	I0816 13:12:12.492845   40000 command_runner.go:130] > 5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23
	I0816 13:12:12.492920   40000 command_runner.go:130] > 99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431
	I0816 13:12:12.495704   40000 cri.go:89] found id: "1bf884fd123a86f6a94ab5aea8257e3302f8a85a9269f32ebf4329e5e3a47b39"
	I0816 13:12:12.495721   40000 cri.go:89] found id: "bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe"
	I0816 13:12:12.495727   40000 cri.go:89] found id: "851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383"
	I0816 13:12:12.495732   40000 cri.go:89] found id: "171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793"
	I0816 13:12:12.495735   40000 cri.go:89] found id: "212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463"
	I0816 13:12:12.495740   40000 cri.go:89] found id: "65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70"
	I0816 13:12:12.495744   40000 cri.go:89] found id: "5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23"
	I0816 13:12:12.495747   40000 cri.go:89] found id: "99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431"
	I0816 13:12:12.495751   40000 cri.go:89] found id: ""
	I0816 13:12:12.495800   40000 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.038828542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814042038804907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10761169-5e0e-4ca1-9536-a62a52663290 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.039298088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb38a1ce-ee58-419b-bfba-0e43e15520a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.039365413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb38a1ce-ee58-419b-bfba-0e43e15520a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.039733884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb38a1ce-ee58-419b-bfba-0e43e15520a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.079391249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9b9df5f-f9f3-4240-b47a-f2e27cd12dee name=/runtime.v1.RuntimeService/Version
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.079613456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9b9df5f-f9f3-4240-b47a-f2e27cd12dee name=/runtime.v1.RuntimeService/Version
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.080558199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ab0b87c-33d2-45e1-8349-524725d8d40b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.081209188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814042081184725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ab0b87c-33d2-45e1-8349-524725d8d40b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.081692426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81f9ed70-c7de-41de-a2fb-62329af5305b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.081747192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81f9ed70-c7de-41de-a2fb-62329af5305b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.082113279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81f9ed70-c7de-41de-a2fb-62329af5305b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.125524429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59d83df4-9a3c-4213-bd79-a049c6d921d9 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.125598916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59d83df4-9a3c-4213-bd79-a049c6d921d9 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.126735357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4810c255-1a93-4d82-9d58-88faf0bf10f3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.127149929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814042127126839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4810c255-1a93-4d82-9d58-88faf0bf10f3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.127729628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ca6dc0e-4189-4372-a247-c65ce79f5e26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.127784466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ca6dc0e-4189-4372-a247-c65ce79f5e26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.128123216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ca6dc0e-4189-4372-a247-c65ce79f5e26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.171295865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c6f6bf7-1175-41d4-9903-d89adc459a12 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.171365665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c6f6bf7-1175-41d4-9903-d89adc459a12 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.172749867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b465b9b-83ff-443d-9c3a-bccf6778a39e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.173211808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814042173188124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b465b9b-83ff-443d-9c3a-bccf6778a39e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.173758410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fee2ce4a-39a8-4896-902c-68b68de58565 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.173842946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fee2ce4a-39a8-4896-902c-68b68de58565 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:14:02 multinode-336982 crio[2748]: time="2024-08-16 13:14:02.174325246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fee2ce4a-39a8-4896-902c-68b68de58565 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5a9414759f877       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   4a67ef1185ff3       busybox-7dff88458-m9dxd
	e05069c8cc20d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   d043b69b8d054       coredns-6f6b679f8f-hlww9
	17bb1938edcda       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   d832dc0a065b1       kindnet-6n4gk
	e8ca000436a53       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   ae2403f8e9d76       kube-proxy-f5nrl
	777e3c00b611e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   c93bff5a6499e       storage-provisioner
	330eb5847d324       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   a0eb69d1a32b2       kube-scheduler-multinode-336982
	d9919750845f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   b03c7e115b2d3       etcd-multinode-336982
	326c1e94f5e1f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   ce022a6356e7f       kube-controller-manager-multinode-336982
	9a0e2cdd1f40c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   f173ddb920f3c       kube-apiserver-multinode-336982
	7a556ba4d113b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   d043b69b8d054       coredns-6f6b679f8f-hlww9
	51fce230c5a5a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   c40879fe636ed       busybox-7dff88458-m9dxd
	bf650b256082f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   86a68133870d8       storage-provisioner
	851fbcba07c08       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   f6cd1d65812a6       kindnet-6n4gk
	171a9c405c59e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   1af3d34c2cf06       kube-proxy-f5nrl
	212bd68acb7c3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   032c3a483ab2c       kube-scheduler-multinode-336982
	65630aa0a16fe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   a53322d16f113       etcd-multinode-336982
	5b58598ac934c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   e961af5b081da       kube-controller-manager-multinode-336982
	99746d40c6523       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   98262cbe87007       kube-apiserver-multinode-336982
	
	
	==> coredns [7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51821 - 50262 "HINFO IN 6604188459201968296.7220454333062668310. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024604934s
	
	
	==> coredns [e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56010 - 34265 "HINFO IN 7361033659007836417.3270599287150432285. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015098427s
	
	
	==> describe nodes <==
	Name:               multinode-336982
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-336982
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-336982
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_05_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-336982
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:13:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    multinode-336982
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6f93cbd0c5d47e4b50511cb3c82abea
	  System UUID:                c6f93cbd-0c5d-47e4-b505-11cb3c82abea
	  Boot ID:                    b66d90f4-a0f3-498a-a206-a3b8d9ad2e69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m9dxd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 coredns-6f6b679f8f-hlww9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-336982                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-6n4gk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-336982             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-336982    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-f5nrl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-336982             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 100s                  kube-proxy       
	  Normal   Starting                 8m21s                 kube-proxy       
	  Normal   Starting                 8m28s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m28s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m27s                 kubelet          Node multinode-336982 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m27s                 kubelet          Node multinode-336982 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m27s                 kubelet          Node multinode-336982 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m24s                 node-controller  Node multinode-336982 event: Registered Node multinode-336982 in Controller
	  Normal   NodeReady                8m7s                  kubelet          Node multinode-336982 status is now: NodeReady
	  Warning  ContainerGCFailed        2m28s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             109s (x7 over 2m50s)  kubelet          Node multinode-336982 status is now: NodeNotReady
	  Normal   RegisteredNode           98s                   node-controller  Node multinode-336982 event: Registered Node multinode-336982 in Controller
	  Normal   Starting                 98s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  98s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  98s                   kubelet          Node multinode-336982 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s                   kubelet          Node multinode-336982 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s                   kubelet          Node multinode-336982 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-336982-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-336982-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-336982
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T13_13_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:13:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-336982-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:14:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:13:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    multinode-336982-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 899bd1eb956a4d558f6a9f86cd27b24a
	  System UUID:                899bd1eb-956a-4d55-8f6a-9f86cd27b24a
	  Boot ID:                    195247a4-a11a-43f0-9450-d95e14f6c438
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rllpf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-hp65f              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m37s
	  kube-system                 kube-proxy-p44kb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m37s (x2 over 7m37s)  kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x2 over 7m37s)  kubelet          Node multinode-336982-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s (x2 over 7m37s)  kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m16s                  kubelet          Node multinode-336982-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet          Node multinode-336982-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                    node-controller  Node multinode-336982-m02 event: Registered Node multinode-336982-m02 in Controller
	  Normal  NodeReady                42s                    kubelet          Node multinode-336982-m02 status is now: NodeReady
	
	
	Name:               multinode-336982-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-336982-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-336982
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T13_13_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:13:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-336982-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:13:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:13:59 +0000   Fri, 16 Aug 2024 13:13:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:13:59 +0000   Fri, 16 Aug 2024 13:13:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:13:59 +0000   Fri, 16 Aug 2024 13:13:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:13:59 +0000   Fri, 16 Aug 2024 13:13:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    multinode-336982-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4ec3211c8421f90a4294ac03b961c
	  System UUID:                66f4ec32-11c8-421f-90a4-294ac03b961c
	  Boot ID:                    feb546c5-c386-43aa-a8f4-a8c66f5d499f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kp5tg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-lg8jj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m34s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m44s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet     Node multinode-336982-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet     Node multinode-336982-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet     Node multinode-336982-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m19s                  kubelet     Node multinode-336982-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet     Node multinode-336982-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet     Node multinode-336982-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet     Node multinode-336982-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-336982-m03 status is now: NodeReady
	  Normal  Starting                 23s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-336982-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-336982-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-336982-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-336982-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.051498] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.197692] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.117742] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.272944] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.970134] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.469535] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060945] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.991922] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.086982] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.120007] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.096299] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +5.085128] kauditd_printk_skb: 59 callbacks suppressed
	[Aug16 13:06] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 13:12] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.148332] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.187933] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.131211] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.278116] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +0.795771] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +5.418072] kauditd_printk_skb: 132 callbacks suppressed
	[  +6.698504] systemd-fstab-generator[3699]: Ignoring "noauto" option for root device
	[  +0.097607] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.244538] kauditd_printk_skb: 21 callbacks suppressed
	[  +3.724894] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	[ +14.942388] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70] <==
	{"level":"info","ts":"2024-08-16T13:05:30.809860Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:05:30.811883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:05:30.812473Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:05:30.812510Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-16T13:06:25.358304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.563385ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210303730053671054 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-336982-m02.17ec37528eff62ef\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-336982-m02.17ec37528eff62ef\" value_size:642 lease:6986931693198894650 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T13:06:25.358616Z","caller":"traceutil/trace.go:171","msg":"trace[246380907] linearizableReadLoop","detail":"{readStateIndex:466; appliedIndex:465; }","duration":"187.795701ms","start":"2024-08-16T13:06:25.170804Z","end":"2024-08-16T13:06:25.358600Z","steps":["trace[246380907] 'read index received'  (duration: 34.389447ms)","trace[246380907] 'applied index is now lower than readState.Index'  (duration: 153.405221ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T13:06:25.358683Z","caller":"traceutil/trace.go:171","msg":"trace[1612275950] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"231.163624ms","start":"2024-08-16T13:06:25.127500Z","end":"2024-08-16T13:06:25.358663Z","steps":["trace[1612275950] 'process raft request'  (duration: 77.734696ms)","trace[1612275950] 'compare'  (duration: 152.438097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:06:25.358727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.912968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:06:25.358746Z","caller":"traceutil/trace.go:171","msg":"trace[1988710534] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:446; }","duration":"187.940458ms","start":"2024-08-16T13:06:25.170800Z","end":"2024-08-16T13:06:25.358741Z","steps":["trace[1988710534] 'agreement among raft nodes before linearized reading'  (duration: 187.875046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:06:30.623762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.665083ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:06:30.624562Z","caller":"traceutil/trace.go:171","msg":"trace[365037968] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:477; }","duration":"102.476047ms","start":"2024-08-16T13:06:30.522073Z","end":"2024-08-16T13:06:30.624549Z","steps":["trace[365037968] 'range keys from in-memory index tree'  (duration: 101.655657ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:07:23.575809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.024393ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210303730053671573 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-336982-m03.17ec37601ea32b97\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-336982-m03.17ec37601ea32b97\" value_size:646 lease:6986931693198895374 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T13:07:23.576306Z","caller":"traceutil/trace.go:171","msg":"trace[846683994] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"209.56128ms","start":"2024-08-16T13:07:23.366670Z","end":"2024-08-16T13:07:23.576232Z","steps":["trace[846683994] 'process raft request'  (duration: 77.921444ms)","trace[846683994] 'compare'  (duration: 130.740012ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:07:27.755098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.88044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-336982-m03\" ","response":"range_response_count:1 size:2887"}
	{"level":"info","ts":"2024-08-16T13:07:27.755161Z","caller":"traceutil/trace.go:171","msg":"trace[729201194] range","detail":"{range_begin:/registry/minions/multinode-336982-m03; range_end:; response_count:1; response_revision:616; }","duration":"140.961034ms","start":"2024-08-16T13:07:27.614189Z","end":"2024-08-16T13:07:27.755150Z","steps":["trace[729201194] 'range keys from in-memory index tree'  (duration: 140.777146ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:10:38.919122Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T13:10:38.919267Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-336982","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	{"level":"warn","ts":"2024-08-16T13:10:38.919526Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:10:38.919637Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:10:39.008887Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:10:39.008925Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T13:10:39.009010Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7fe6bf77aaafe0f6","current-leader-member-id":"7fe6bf77aaafe0f6"}
	{"level":"info","ts":"2024-08-16T13:10:39.011494Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-08-16T13:10:39.011654Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-08-16T13:10:39.011701Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-336982","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> etcd [d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741] <==
	{"level":"info","ts":"2024-08-16T13:12:18.068244Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:12:19.904698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T13:12:19.904760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:12:19.904798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgPreVoteResp from 7fe6bf77aaafe0f6 at term 2"}
	{"level":"info","ts":"2024-08-16T13:12:19.904817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.904824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.904841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.904848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.907482Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7fe6bf77aaafe0f6","local-member-attributes":"{Name:multinode-336982 ClientURLs:[https://192.168.39.208:2379]}","request-path":"/0/members/7fe6bf77aaafe0f6/attributes","cluster-id":"fb8a78b66dce1ac7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:12:19.907543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:12:19.907728Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:12:19.907805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:12:19.907901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:12:19.909175Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:12:19.909686Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:12:19.910108Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-08-16T13:12:19.911335Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:13:04.794859Z","caller":"traceutil/trace.go:171","msg":"trace[1905363610] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"162.716563ms","start":"2024-08-16T13:13:04.632106Z","end":"2024-08-16T13:13:04.794823Z","steps":["trace[1905363610] 'process raft request'  (duration: 162.566632ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:13:04.795964Z","caller":"traceutil/trace.go:171","msg":"trace[2106163456] linearizableReadLoop","detail":"{readStateIndex:1238; appliedIndex:1237; }","duration":"133.397624ms","start":"2024-08-16T13:13:04.662548Z","end":"2024-08-16T13:13:04.795945Z","steps":["trace[2106163456] 'read index received'  (duration: 132.829972ms)","trace[2106163456] 'applied index is now lower than readState.Index'  (duration: 566.944µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:13:04.796152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.550452ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:13:04.796270Z","caller":"traceutil/trace.go:171","msg":"trace[1279985025] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1130; }","duration":"133.713513ms","start":"2024-08-16T13:13:04.662541Z","end":"2024-08-16T13:13:04.796255Z","steps":["trace[1279985025] 'agreement among raft nodes before linearized reading'  (duration: 133.495098ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:13:04.796840Z","caller":"traceutil/trace.go:171","msg":"trace[251242566] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"153.821259ms","start":"2024-08-16T13:13:04.643006Z","end":"2024-08-16T13:13:04.796828Z","steps":["trace[251242566] 'process raft request'  (duration: 152.83643ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:13:47.809199Z","caller":"traceutil/trace.go:171","msg":"trace[145983938] linearizableReadLoop","detail":"{readStateIndex:1352; appliedIndex:1351; }","duration":"146.990408ms","start":"2024-08-16T13:13:47.662135Z","end":"2024-08-16T13:13:47.809125Z","steps":["trace[145983938] 'read index received'  (duration: 117.340294ms)","trace[145983938] 'applied index is now lower than readState.Index'  (duration: 29.649068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:13:47.809370Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.177894ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:13:47.809598Z","caller":"traceutil/trace.go:171","msg":"trace[1585124629] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1232; }","duration":"147.456121ms","start":"2024-08-16T13:13:47.662129Z","end":"2024-08-16T13:13:47.809585Z","steps":["trace[1585124629] 'agreement among raft nodes before linearized reading'  (duration: 147.142498ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:14:02 up 9 min,  0 users,  load average: 0.21, 0.33, 0.18
	Linux multinode-336982 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c] <==
	I0816 13:13:18.730761       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:13:18.730903       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:13:18.730925       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:13:28.732367       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:13:28.732401       1 main.go:299] handling current node
	I0816 13:13:28.732470       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:13:28.732478       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:13:28.732641       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:13:28.732675       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:13:38.730223       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:13:38.730272       1 main.go:299] handling current node
	I0816 13:13:38.730287       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:13:38.730292       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:13:48.733108       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:13:48.733154       1 main.go:299] handling current node
	I0816 13:13:48.733168       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:13:48.733174       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:13:48.733327       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:13:48.733352       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.2.0/24] 
	I0816 13:13:58.731707       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:13:58.731753       1 main.go:299] handling current node
	I0816 13:13:58.731767       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:13:58.731772       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:13:58.731899       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:13:58.731923       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383] <==
	I0816 13:09:55.222600       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:05.221852       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:05.221976       1 main.go:299] handling current node
	I0816 13:10:05.222009       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:05.222027       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:05.222173       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:05.222195       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:15.227865       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:15.227966       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:15.228162       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:15.228190       1 main.go:299] handling current node
	I0816 13:10:15.228215       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:15.228232       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:25.228650       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:25.228749       1 main.go:299] handling current node
	I0816 13:10:25.228780       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:25.228814       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:25.228962       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:25.228985       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:35.222179       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:35.222214       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:35.222349       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:35.222355       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:35.222601       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:35.222611       1 main.go:299] handling current node
	
	
	==> kube-apiserver [99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431] <==
	I0816 13:05:33.812633       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:05:33.817025       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:05:34.160735       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:05:34.979230       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:05:35.002966       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 13:05:35.013707       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:05:39.756995       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 13:05:39.987320       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0816 13:06:53.744780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57522: use of closed network connection
	E0816 13:06:53.914614       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57544: use of closed network connection
	E0816 13:06:54.097217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57560: use of closed network connection
	E0816 13:06:54.263230       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57576: use of closed network connection
	E0816 13:06:54.445165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57596: use of closed network connection
	E0816 13:06:54.603189       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57610: use of closed network connection
	E0816 13:06:54.889147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57622: use of closed network connection
	E0816 13:06:55.066329       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57632: use of closed network connection
	E0816 13:06:55.246686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57642: use of closed network connection
	E0816 13:06:55.403606       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57660: use of closed network connection
	I0816 13:10:38.917928       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0816 13:10:38.941961       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.946810       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.946980       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.947054       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.948084       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.948834       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc] <==
	I0816 13:12:21.241928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 13:12:21.243501       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 13:12:21.243553       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 13:12:21.260143       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 13:12:21.260228       1 policy_source.go:224] refreshing policies
	I0816 13:12:21.279657       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:12:21.297657       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 13:12:21.307902       1 aggregator.go:171] initial CRD sync complete...
	I0816 13:12:21.307973       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 13:12:21.307982       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 13:12:21.307991       1 cache.go:39] Caches are synced for autoregister controller
	E0816 13:12:21.336941       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 13:12:21.342164       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 13:12:21.342502       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 13:12:21.342894       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 13:12:21.343751       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 13:12:21.369233       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 13:12:22.146935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 13:12:24.612733       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:12:24.822013       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:12:24.889371       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:12:24.981205       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:12:24.997536       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:12:25.081355       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:12:25.089801       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f] <==
	I0816 13:13:20.307014       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:13:20.314472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.596µs"
	I0816 13:13:20.329204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.664µs"
	I0816 13:13:23.863798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.073604ms"
	I0816 13:13:23.864142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.414µs"
	I0816 13:13:24.649519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:13:31.536220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:13:38.221077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:38.238621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:38.484777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:13:38.484854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.641604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:13:39.655048       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-336982-m03" podCIDRs=["10.244.2.0/24"]
	I0816 13:13:39.655529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.655773       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-336982-m03\" does not exist"
	I0816 13:13:39.660389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.681125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.738367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:40.024502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:40.348096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:49.770979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:59.279652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:59.279735       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:13:59.295577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:59.670195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	
	
	==> kube-controller-manager [5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23] <==
	I0816 13:08:12.525866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:12.525981       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:08:13.550731       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-336982-m03\" does not exist"
	I0816 13:08:13.551175       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:08:13.577722       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-336982-m03" podCIDRs=["10.244.3.0/24"]
	I0816 13:08:13.577764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:13.577789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:13.577940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:13.979334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:14.034542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:14.370112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:23.577608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:33.443192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:33.444570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:08:33.454826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:33.926842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:09:18.945841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:09:18.946989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m03"
	I0816 13:09:18.951284       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:09:18.970318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:09:18.978933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:09:19.026697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.187166ms"
	I0816 13:09:19.027045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.289µs"
	I0816 13:09:24.040707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:09:34.121041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	
	
	==> kube-proxy [171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:05:40.859239       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:05:40.874196       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0816 13:05:40.874290       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:05:40.935503       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:05:40.935556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:05:40.935587       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:05:40.938984       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:05:40.939282       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:05:40.939297       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:05:40.940857       1 config.go:197] "Starting service config controller"
	I0816 13:05:40.940883       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:05:40.940926       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:05:40.940931       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:05:40.941372       1 config.go:326] "Starting node config controller"
	I0816 13:05:40.941378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:05:41.045312       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:05:41.045356       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:05:41.045381       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:12:18.856613       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:12:21.318808       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0816 13:12:21.318942       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:12:21.502089       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:12:21.504995       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:12:21.510469       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:12:21.519063       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:12:21.519356       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:12:21.520254       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:12:21.527103       1 config.go:197] "Starting service config controller"
	I0816 13:12:21.527152       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:12:21.527172       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:12:21.527176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:12:21.532096       1 config.go:326] "Starting node config controller"
	I0816 13:12:21.532179       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:12:21.627726       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:12:21.627835       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:12:21.632560       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463] <==
	E0816 13:05:32.171292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.028631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 13:05:33.028684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.142102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 13:05:33.142153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.205126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 13:05:33.205270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.224780       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 13:05:33.224836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.229885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:05:33.230966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.259484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:05:33.259878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.335485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:05:33.335537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.439290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:05:33.439345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.447726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:05:33.447776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.473478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 13:05:33.473524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.619444       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 13:05:33.619479       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 13:05:36.463088       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 13:10:38.912505       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97] <==
	I0816 13:12:18.600393       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:12:21.238154       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:12:21.238200       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:12:21.238210       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:12:21.238221       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:12:21.276742       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:12:21.276787       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:12:21.285824       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:12:21.285885       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:12:21.286600       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:12:21.286697       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:12:21.386282       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 13:12:25 multinode-336982 kubelet[3706]: I0816 13:12:25.439526    3706 scope.go:117] "RemoveContainer" containerID="7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521"
	Aug 16 13:12:32 multinode-336982 kubelet[3706]: I0816 13:12:32.193113    3706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 13:12:34 multinode-336982 kubelet[3706]: E0816 13:12:34.306392    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813954306040971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:12:34 multinode-336982 kubelet[3706]: E0816 13:12:34.306810    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813954306040971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:12:44 multinode-336982 kubelet[3706]: E0816 13:12:44.309103    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813964308610593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:12:44 multinode-336982 kubelet[3706]: E0816 13:12:44.309393    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813964308610593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:12:54 multinode-336982 kubelet[3706]: E0816 13:12:54.311648    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813974311319875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:12:54 multinode-336982 kubelet[3706]: E0816 13:12:54.311677    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813974311319875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:04 multinode-336982 kubelet[3706]: E0816 13:13:04.316228    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813984315397363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:04 multinode-336982 kubelet[3706]: E0816 13:13:04.316970    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813984315397363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:14 multinode-336982 kubelet[3706]: E0816 13:13:14.320507    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813994319447771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:14 multinode-336982 kubelet[3706]: E0816 13:13:14.321097    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723813994319447771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:24 multinode-336982 kubelet[3706]: E0816 13:13:24.311914    3706 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:13:24 multinode-336982 kubelet[3706]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:13:24 multinode-336982 kubelet[3706]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:13:24 multinode-336982 kubelet[3706]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:13:24 multinode-336982 kubelet[3706]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:13:24 multinode-336982 kubelet[3706]: E0816 13:13:24.323675    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814004323212148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:24 multinode-336982 kubelet[3706]: E0816 13:13:24.323726    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814004323212148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:34 multinode-336982 kubelet[3706]: E0816 13:13:34.325097    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814014324867130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:34 multinode-336982 kubelet[3706]: E0816 13:13:34.325123    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814014324867130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:44 multinode-336982 kubelet[3706]: E0816 13:13:44.326388    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814024326164091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:44 multinode-336982 kubelet[3706]: E0816 13:13:44.326466    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814024326164091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:54 multinode-336982 kubelet[3706]: E0816 13:13:54.329247    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814034328679406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:13:54 multinode-336982 kubelet[3706]: E0816 13:13:54.329729    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814034328679406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:14:01.767426   41072 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-3966/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-336982 -n multinode-336982
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-336982 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.63s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 stop
E0816 13:15:40.923720   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-336982 stop: exit status 82 (2m0.462498438s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-336982-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-336982 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-336982 status: exit status 3 (18.760249023s)

                                                
                                                
-- stdout --
	multinode-336982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-336982-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:16:24.757252   41713 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0816 13:16:24.757284   41713 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-336982 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-336982 -n multinode-336982
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-336982 logs -n 25: (1.458282728s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982:/home/docker/cp-test_multinode-336982-m02_multinode-336982.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982 sudo cat                                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m02_multinode-336982.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03:/home/docker/cp-test_multinode-336982-m02_multinode-336982-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982-m03 sudo cat                                   | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m02_multinode-336982-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp testdata/cp-test.txt                                                | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile804343114/001/cp-test_multinode-336982-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982:/home/docker/cp-test_multinode-336982-m03_multinode-336982.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982 sudo cat                                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m03_multinode-336982.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt                       | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m02:/home/docker/cp-test_multinode-336982-m03_multinode-336982-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n                                                                 | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | multinode-336982-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-336982 ssh -n multinode-336982-m02 sudo cat                                   | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	|         | /home/docker/cp-test_multinode-336982-m03_multinode-336982-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-336982 node stop m03                                                          | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:07 UTC |
	| node    | multinode-336982 node start                                                             | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:07 UTC | 16 Aug 24 13:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-336982                                                                | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:08 UTC |                     |
	| stop    | -p multinode-336982                                                                     | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:08 UTC |                     |
	| start   | -p multinode-336982                                                                     | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:10 UTC | 16 Aug 24 13:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-336982                                                                | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:14 UTC |                     |
	| node    | multinode-336982 node delete                                                            | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:14 UTC | 16 Aug 24 13:14 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-336982 stop                                                                   | multinode-336982 | jenkins | v1.33.1 | 16 Aug 24 13:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:10:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:10:37.862714   40000 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:10:37.862960   40000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:10:37.862969   40000 out.go:358] Setting ErrFile to fd 2...
	I0816 13:10:37.862974   40000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:10:37.863175   40000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:10:37.863694   40000 out.go:352] Setting JSON to false
	I0816 13:10:37.864571   40000 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3183,"bootTime":1723810655,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:10:37.864627   40000 start.go:139] virtualization: kvm guest
	I0816 13:10:37.867680   40000 out.go:177] * [multinode-336982] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:10:37.869050   40000 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:10:37.869059   40000 notify.go:220] Checking for updates...
	I0816 13:10:37.870573   40000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:10:37.872080   40000 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:10:37.873556   40000 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:10:37.874908   40000 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:10:37.876139   40000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:10:37.878245   40000 config.go:182] Loaded profile config "multinode-336982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:10:37.878362   40000 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:10:37.878980   40000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:10:37.879031   40000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:10:37.894197   40000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0816 13:10:37.894583   40000 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:10:37.895145   40000 main.go:141] libmachine: Using API Version  1
	I0816 13:10:37.895166   40000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:10:37.895563   40000 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:10:37.895790   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:10:37.930339   40000 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:10:37.931787   40000 start.go:297] selected driver: kvm2
	I0816 13:10:37.931798   40000 start.go:901] validating driver "kvm2" against &{Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:10:37.931932   40000 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:10:37.932268   40000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:10:37.932340   40000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:10:37.946778   40000 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:10:37.947443   40000 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:10:37.947477   40000 cni.go:84] Creating CNI manager for ""
	I0816 13:10:37.947487   40000 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 13:10:37.947542   40000 start.go:340] cluster config:
	{Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:10:37.947660   40000 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:10:37.949514   40000 out.go:177] * Starting "multinode-336982" primary control-plane node in "multinode-336982" cluster
	I0816 13:10:37.950740   40000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:10:37.950786   40000 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:10:37.950793   40000 cache.go:56] Caching tarball of preloaded images
	I0816 13:10:37.950864   40000 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:10:37.950876   40000 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:10:37.950981   40000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/config.json ...
	I0816 13:10:37.951159   40000 start.go:360] acquireMachinesLock for multinode-336982: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:10:37.951193   40000 start.go:364] duration metric: took 18.581µs to acquireMachinesLock for "multinode-336982"
	I0816 13:10:37.951212   40000 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:10:37.951219   40000 fix.go:54] fixHost starting: 
	I0816 13:10:37.951461   40000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:10:37.951492   40000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:10:37.965607   40000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0816 13:10:37.966026   40000 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:10:37.966455   40000 main.go:141] libmachine: Using API Version  1
	I0816 13:10:37.966477   40000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:10:37.966811   40000 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:10:37.967020   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:10:37.967182   40000 main.go:141] libmachine: (multinode-336982) Calling .GetState
	I0816 13:10:37.968597   40000 fix.go:112] recreateIfNeeded on multinode-336982: state=Running err=<nil>
	W0816 13:10:37.968623   40000 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:10:37.971390   40000 out.go:177] * Updating the running kvm2 "multinode-336982" VM ...
	I0816 13:10:37.972750   40000 machine.go:93] provisionDockerMachine start ...
	I0816 13:10:37.972769   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:10:37.973000   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:37.975406   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:37.975796   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:37.975822   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:37.975964   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:37.976119   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:37.976296   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:37.976426   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:37.976592   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:37.976826   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:37.976843   40000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:10:38.078028   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-336982
	
	I0816 13:10:38.078053   40000 main.go:141] libmachine: (multinode-336982) Calling .GetMachineName
	I0816 13:10:38.078277   40000 buildroot.go:166] provisioning hostname "multinode-336982"
	I0816 13:10:38.078298   40000 main.go:141] libmachine: (multinode-336982) Calling .GetMachineName
	I0816 13:10:38.078500   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.081036   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.081382   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.081409   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.081588   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.081754   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.081901   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.082026   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.082151   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:38.082304   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:38.082318   40000 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-336982 && echo "multinode-336982" | sudo tee /etc/hostname
	I0816 13:10:38.201525   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-336982
	
	I0816 13:10:38.201548   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.204246   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.204664   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.204694   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.204869   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.205042   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.205217   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.205352   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.205533   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:38.205693   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:38.205711   40000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-336982' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-336982/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-336982' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:10:38.302767   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:10:38.302797   40000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:10:38.302822   40000 buildroot.go:174] setting up certificates
	I0816 13:10:38.302835   40000 provision.go:84] configureAuth start
	I0816 13:10:38.302850   40000 main.go:141] libmachine: (multinode-336982) Calling .GetMachineName
	I0816 13:10:38.303251   40000 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:10:38.305949   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.306403   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.306432   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.306591   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.308564   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.308825   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.308851   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.308948   40000 provision.go:143] copyHostCerts
	I0816 13:10:38.308983   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:10:38.309016   40000 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:10:38.309031   40000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:10:38.309098   40000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:10:38.309193   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:10:38.309211   40000 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:10:38.309219   40000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:10:38.309243   40000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:10:38.309287   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:10:38.309305   40000 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:10:38.309309   40000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:10:38.309330   40000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:10:38.309372   40000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.multinode-336982 san=[127.0.0.1 192.168.39.208 localhost minikube multinode-336982]
	I0816 13:10:38.628766   40000 provision.go:177] copyRemoteCerts
	I0816 13:10:38.628823   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:10:38.628844   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.631298   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.631643   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.631674   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.631855   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.632079   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.632291   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.632404   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:10:38.711540   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 13:10:38.711608   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 13:10:38.737228   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 13:10:38.737302   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:10:38.763873   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 13:10:38.763945   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0816 13:10:38.788994   40000 provision.go:87] duration metric: took 486.143983ms to configureAuth
	I0816 13:10:38.789029   40000 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:10:38.789399   40000 config.go:182] Loaded profile config "multinode-336982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:10:38.789508   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:10:38.791870   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.792252   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:10:38.792284   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:10:38.792387   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:10:38.792563   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.792717   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:10:38.792862   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:10:38.793045   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:10:38.793189   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:10:38.793203   40000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:12:09.626425   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:12:09.626453   40000 machine.go:96] duration metric: took 1m31.653690279s to provisionDockerMachine
	I0816 13:12:09.626465   40000 start.go:293] postStartSetup for "multinode-336982" (driver="kvm2")
	I0816 13:12:09.626479   40000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:12:09.626497   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.626816   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:12:09.626845   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.629758   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.630165   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.630304   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.630403   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.630652   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.630821   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.630952   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:12:09.712956   40000 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:12:09.717434   40000 command_runner.go:130] > NAME=Buildroot
	I0816 13:12:09.717452   40000 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 13:12:09.717457   40000 command_runner.go:130] > ID=buildroot
	I0816 13:12:09.717464   40000 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 13:12:09.717469   40000 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 13:12:09.717504   40000 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:12:09.717520   40000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:12:09.717585   40000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:12:09.717674   40000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:12:09.717684   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /etc/ssl/certs/111492.pem
	I0816 13:12:09.717782   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:12:09.727599   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:12:09.754383   40000 start.go:296] duration metric: took 127.906448ms for postStartSetup
	I0816 13:12:09.754428   40000 fix.go:56] duration metric: took 1m31.803207589s for fixHost
	I0816 13:12:09.754452   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.757213   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.757647   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.757676   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.757836   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.758054   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.758225   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.758371   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.758567   40000 main.go:141] libmachine: Using SSH client type: native
	I0816 13:12:09.758709   40000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0816 13:12:09.758719   40000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:12:09.853733   40000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723813929.831822526
	
	I0816 13:12:09.853753   40000 fix.go:216] guest clock: 1723813929.831822526
	I0816 13:12:09.853761   40000 fix.go:229] Guest: 2024-08-16 13:12:09.831822526 +0000 UTC Remote: 2024-08-16 13:12:09.754433623 +0000 UTC m=+91.924913360 (delta=77.388903ms)
	I0816 13:12:09.853791   40000 fix.go:200] guest clock delta is within tolerance: 77.388903ms
	I0816 13:12:09.853795   40000 start.go:83] releasing machines lock for "multinode-336982", held for 1m31.902593602s
	I0816 13:12:09.853813   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.854101   40000 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:12:09.856610   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.856972   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.856999   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.857109   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.857542   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.857713   40000 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:12:09.857801   40000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:12:09.857852   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.857943   40000 ssh_runner.go:195] Run: cat /version.json
	I0816 13:12:09.857969   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:12:09.860531   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.860860   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.860886   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.860919   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.861038   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.861228   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.861372   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.861456   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:09.861490   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:09.861492   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:12:09.861655   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:12:09.861811   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:12:09.861965   40000 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:12:09.862100   40000 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:12:09.954791   40000 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 13:12:09.954837   40000 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0816 13:12:09.954961   40000 ssh_runner.go:195] Run: systemctl --version
	I0816 13:12:09.960590   40000 command_runner.go:130] > systemd 252 (252)
	I0816 13:12:09.960620   40000 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0816 13:12:09.960866   40000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:12:10.121449   40000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 13:12:10.129227   40000 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 13:12:10.129274   40000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:12:10.129343   40000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:12:10.138877   40000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 13:12:10.138900   40000 start.go:495] detecting cgroup driver to use...
	I0816 13:12:10.138953   40000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:12:10.154966   40000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:12:10.170445   40000 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:12:10.170517   40000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:12:10.184593   40000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:12:10.198999   40000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:12:10.347015   40000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:12:10.498588   40000 docker.go:233] disabling docker service ...
	I0816 13:12:10.498647   40000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:12:10.516301   40000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:12:10.530034   40000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:12:10.670250   40000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:12:10.806431   40000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:12:10.820851   40000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:12:10.839862   40000 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0816 13:12:10.839907   40000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:12:10.839962   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.850809   40000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:12:10.850917   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.861426   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.872048   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.882663   40000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:12:10.893516   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.903836   40000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.915264   40000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:12:10.925754   40000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:12:10.935190   40000 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 13:12:10.935258   40000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:12:10.944154   40000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:12:11.093364   40000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:12:11.411161   40000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:12:11.411244   40000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:12:11.416300   40000 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0816 13:12:11.416326   40000 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 13:12:11.416335   40000 command_runner.go:130] > Device: 0,22	Inode: 1335        Links: 1
	I0816 13:12:11.416344   40000 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 13:12:11.416352   40000 command_runner.go:130] > Access: 2024-08-16 13:12:11.286505345 +0000
	I0816 13:12:11.416368   40000 command_runner.go:130] > Modify: 2024-08-16 13:12:11.285505322 +0000
	I0816 13:12:11.416378   40000 command_runner.go:130] > Change: 2024-08-16 13:12:11.286505345 +0000
	I0816 13:12:11.416383   40000 command_runner.go:130] >  Birth: -
	I0816 13:12:11.416411   40000 start.go:563] Will wait 60s for crictl version
	I0816 13:12:11.416454   40000 ssh_runner.go:195] Run: which crictl
	I0816 13:12:11.420272   40000 command_runner.go:130] > /usr/bin/crictl
	I0816 13:12:11.420334   40000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:12:11.463487   40000 command_runner.go:130] > Version:  0.1.0
	I0816 13:12:11.463508   40000 command_runner.go:130] > RuntimeName:  cri-o
	I0816 13:12:11.463512   40000 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0816 13:12:11.463518   40000 command_runner.go:130] > RuntimeApiVersion:  v1
	I0816 13:12:11.463665   40000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:12:11.463739   40000 ssh_runner.go:195] Run: crio --version
	I0816 13:12:11.489892   40000 command_runner.go:130] > crio version 1.29.1
	I0816 13:12:11.489913   40000 command_runner.go:130] > Version:        1.29.1
	I0816 13:12:11.489921   40000 command_runner.go:130] > GitCommit:      unknown
	I0816 13:12:11.489927   40000 command_runner.go:130] > GitCommitDate:  unknown
	I0816 13:12:11.489933   40000 command_runner.go:130] > GitTreeState:   clean
	I0816 13:12:11.489941   40000 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0816 13:12:11.489947   40000 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 13:12:11.489952   40000 command_runner.go:130] > Compiler:       gc
	I0816 13:12:11.489957   40000 command_runner.go:130] > Platform:       linux/amd64
	I0816 13:12:11.489962   40000 command_runner.go:130] > Linkmode:       dynamic
	I0816 13:12:11.489969   40000 command_runner.go:130] > BuildTags:      
	I0816 13:12:11.489975   40000 command_runner.go:130] >   containers_image_ostree_stub
	I0816 13:12:11.489982   40000 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 13:12:11.489988   40000 command_runner.go:130] >   btrfs_noversion
	I0816 13:12:11.489995   40000 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 13:12:11.490005   40000 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 13:12:11.490013   40000 command_runner.go:130] >   seccomp
	I0816 13:12:11.490023   40000 command_runner.go:130] > LDFlags:          unknown
	I0816 13:12:11.490031   40000 command_runner.go:130] > SeccompEnabled:   true
	I0816 13:12:11.490083   40000 command_runner.go:130] > AppArmorEnabled:  false
	I0816 13:12:11.491236   40000 ssh_runner.go:195] Run: crio --version
	I0816 13:12:11.520075   40000 command_runner.go:130] > crio version 1.29.1
	I0816 13:12:11.520099   40000 command_runner.go:130] > Version:        1.29.1
	I0816 13:12:11.520106   40000 command_runner.go:130] > GitCommit:      unknown
	I0816 13:12:11.520111   40000 command_runner.go:130] > GitCommitDate:  unknown
	I0816 13:12:11.520115   40000 command_runner.go:130] > GitTreeState:   clean
	I0816 13:12:11.520120   40000 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0816 13:12:11.520124   40000 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 13:12:11.520129   40000 command_runner.go:130] > Compiler:       gc
	I0816 13:12:11.520133   40000 command_runner.go:130] > Platform:       linux/amd64
	I0816 13:12:11.520137   40000 command_runner.go:130] > Linkmode:       dynamic
	I0816 13:12:11.520142   40000 command_runner.go:130] > BuildTags:      
	I0816 13:12:11.520146   40000 command_runner.go:130] >   containers_image_ostree_stub
	I0816 13:12:11.520151   40000 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 13:12:11.520155   40000 command_runner.go:130] >   btrfs_noversion
	I0816 13:12:11.520158   40000 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 13:12:11.520162   40000 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 13:12:11.520165   40000 command_runner.go:130] >   seccomp
	I0816 13:12:11.520169   40000 command_runner.go:130] > LDFlags:          unknown
	I0816 13:12:11.520174   40000 command_runner.go:130] > SeccompEnabled:   true
	I0816 13:12:11.520179   40000 command_runner.go:130] > AppArmorEnabled:  false
	I0816 13:12:11.523047   40000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:12:11.524509   40000 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:12:11.527111   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:11.527410   40000 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:12:11.527434   40000 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:12:11.527580   40000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:12:11.531896   40000 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0816 13:12:11.532000   40000 kubeadm.go:883] updating cluster {Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:12:11.532159   40000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:12:11.532201   40000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:12:11.573486   40000 command_runner.go:130] > {
	I0816 13:12:11.573510   40000 command_runner.go:130] >   "images": [
	I0816 13:12:11.573515   40000 command_runner.go:130] >     {
	I0816 13:12:11.573523   40000 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 13:12:11.573528   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573534   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 13:12:11.573537   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573555   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573563   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 13:12:11.573570   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 13:12:11.573574   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573578   40000 command_runner.go:130] >       "size": "87165492",
	I0816 13:12:11.573582   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573587   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573593   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573597   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573600   40000 command_runner.go:130] >     },
	I0816 13:12:11.573604   40000 command_runner.go:130] >     {
	I0816 13:12:11.573611   40000 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 13:12:11.573618   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573623   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 13:12:11.573628   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573632   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573638   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 13:12:11.573647   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 13:12:11.573650   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573654   40000 command_runner.go:130] >       "size": "87190579",
	I0816 13:12:11.573658   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573667   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573671   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573675   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573678   40000 command_runner.go:130] >     },
	I0816 13:12:11.573682   40000 command_runner.go:130] >     {
	I0816 13:12:11.573687   40000 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 13:12:11.573693   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573698   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 13:12:11.573701   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573705   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573712   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 13:12:11.573719   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 13:12:11.573723   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573730   40000 command_runner.go:130] >       "size": "1363676",
	I0816 13:12:11.573733   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573741   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573748   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573752   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573755   40000 command_runner.go:130] >     },
	I0816 13:12:11.573758   40000 command_runner.go:130] >     {
	I0816 13:12:11.573763   40000 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 13:12:11.573768   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573773   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 13:12:11.573778   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573782   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573792   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 13:12:11.573807   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 13:12:11.573813   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573817   40000 command_runner.go:130] >       "size": "31470524",
	I0816 13:12:11.573821   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573827   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.573834   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573838   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573844   40000 command_runner.go:130] >     },
	I0816 13:12:11.573848   40000 command_runner.go:130] >     {
	I0816 13:12:11.573854   40000 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 13:12:11.573860   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573866   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 13:12:11.573871   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573876   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573885   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 13:12:11.573894   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 13:12:11.573899   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573903   40000 command_runner.go:130] >       "size": "61245718",
	I0816 13:12:11.573909   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.573914   40000 command_runner.go:130] >       "username": "nonroot",
	I0816 13:12:11.573920   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.573924   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.573929   40000 command_runner.go:130] >     },
	I0816 13:12:11.573932   40000 command_runner.go:130] >     {
	I0816 13:12:11.573941   40000 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 13:12:11.573945   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.573952   40000 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 13:12:11.573958   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573964   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.573971   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 13:12:11.573979   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 13:12:11.573983   40000 command_runner.go:130] >       ],
	I0816 13:12:11.573987   40000 command_runner.go:130] >       "size": "149009664",
	I0816 13:12:11.573993   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.573997   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574001   40000 command_runner.go:130] >       },
	I0816 13:12:11.574005   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574009   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574013   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574016   40000 command_runner.go:130] >     },
	I0816 13:12:11.574020   40000 command_runner.go:130] >     {
	I0816 13:12:11.574028   40000 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 13:12:11.574032   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574037   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 13:12:11.574043   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574047   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574056   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 13:12:11.574067   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 13:12:11.574073   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574078   40000 command_runner.go:130] >       "size": "95233506",
	I0816 13:12:11.574084   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574088   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574092   40000 command_runner.go:130] >       },
	I0816 13:12:11.574095   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574100   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574109   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574116   40000 command_runner.go:130] >     },
	I0816 13:12:11.574119   40000 command_runner.go:130] >     {
	I0816 13:12:11.574127   40000 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 13:12:11.574133   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574138   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 13:12:11.574145   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574149   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574166   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 13:12:11.574177   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 13:12:11.574183   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574188   40000 command_runner.go:130] >       "size": "89437512",
	I0816 13:12:11.574194   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574202   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574207   40000 command_runner.go:130] >       },
	I0816 13:12:11.574211   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574216   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574221   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574224   40000 command_runner.go:130] >     },
	I0816 13:12:11.574228   40000 command_runner.go:130] >     {
	I0816 13:12:11.574234   40000 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 13:12:11.574238   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574242   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 13:12:11.574245   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574249   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574256   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 13:12:11.574263   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 13:12:11.574266   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574269   40000 command_runner.go:130] >       "size": "92728217",
	I0816 13:12:11.574273   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.574277   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574280   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574284   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574286   40000 command_runner.go:130] >     },
	I0816 13:12:11.574289   40000 command_runner.go:130] >     {
	I0816 13:12:11.574294   40000 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 13:12:11.574298   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574303   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 13:12:11.574309   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574313   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574321   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 13:12:11.574330   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 13:12:11.574339   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574345   40000 command_runner.go:130] >       "size": "68420936",
	I0816 13:12:11.574349   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574357   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.574363   40000 command_runner.go:130] >       },
	I0816 13:12:11.574367   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574373   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574377   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.574383   40000 command_runner.go:130] >     },
	I0816 13:12:11.574386   40000 command_runner.go:130] >     {
	I0816 13:12:11.574394   40000 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 13:12:11.574400   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.574405   40000 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 13:12:11.574410   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574414   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.574421   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 13:12:11.574429   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 13:12:11.574435   40000 command_runner.go:130] >       ],
	I0816 13:12:11.574440   40000 command_runner.go:130] >       "size": "742080",
	I0816 13:12:11.574445   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.574449   40000 command_runner.go:130] >         "value": "65535"
	I0816 13:12:11.574455   40000 command_runner.go:130] >       },
	I0816 13:12:11.574460   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.574466   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.574469   40000 command_runner.go:130] >       "pinned": true
	I0816 13:12:11.574475   40000 command_runner.go:130] >     }
	I0816 13:12:11.574478   40000 command_runner.go:130] >   ]
	I0816 13:12:11.574481   40000 command_runner.go:130] > }
	I0816 13:12:11.574648   40000 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:12:11.574658   40000 crio.go:433] Images already preloaded, skipping extraction
	I0816 13:12:11.574701   40000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:12:11.607970   40000 command_runner.go:130] > {
	I0816 13:12:11.607997   40000 command_runner.go:130] >   "images": [
	I0816 13:12:11.608002   40000 command_runner.go:130] >     {
	I0816 13:12:11.608010   40000 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 13:12:11.608015   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608021   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 13:12:11.608025   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608029   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608038   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 13:12:11.608045   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 13:12:11.608048   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608052   40000 command_runner.go:130] >       "size": "87165492",
	I0816 13:12:11.608057   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608072   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608079   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608087   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608090   40000 command_runner.go:130] >     },
	I0816 13:12:11.608097   40000 command_runner.go:130] >     {
	I0816 13:12:11.608103   40000 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 13:12:11.608111   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608144   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 13:12:11.608147   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608151   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608158   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 13:12:11.608167   40000 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 13:12:11.608173   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608177   40000 command_runner.go:130] >       "size": "87190579",
	I0816 13:12:11.608181   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608188   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608194   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608198   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608203   40000 command_runner.go:130] >     },
	I0816 13:12:11.608207   40000 command_runner.go:130] >     {
	I0816 13:12:11.608213   40000 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 13:12:11.608218   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608223   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 13:12:11.608227   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608230   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608237   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 13:12:11.608246   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 13:12:11.608250   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608254   40000 command_runner.go:130] >       "size": "1363676",
	I0816 13:12:11.608259   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608264   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608270   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608274   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608279   40000 command_runner.go:130] >     },
	I0816 13:12:11.608282   40000 command_runner.go:130] >     {
	I0816 13:12:11.608290   40000 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 13:12:11.608293   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608305   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 13:12:11.608311   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608315   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608325   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 13:12:11.608339   40000 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 13:12:11.608345   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608350   40000 command_runner.go:130] >       "size": "31470524",
	I0816 13:12:11.608353   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608357   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608361   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608368   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608373   40000 command_runner.go:130] >     },
	I0816 13:12:11.608377   40000 command_runner.go:130] >     {
	I0816 13:12:11.608383   40000 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 13:12:11.608389   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608394   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 13:12:11.608400   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608404   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608411   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 13:12:11.608436   40000 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 13:12:11.608447   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608452   40000 command_runner.go:130] >       "size": "61245718",
	I0816 13:12:11.608456   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608460   40000 command_runner.go:130] >       "username": "nonroot",
	I0816 13:12:11.608464   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608468   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608474   40000 command_runner.go:130] >     },
	I0816 13:12:11.608477   40000 command_runner.go:130] >     {
	I0816 13:12:11.608483   40000 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 13:12:11.608488   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608492   40000 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 13:12:11.608498   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608503   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608510   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 13:12:11.608518   40000 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 13:12:11.608522   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608526   40000 command_runner.go:130] >       "size": "149009664",
	I0816 13:12:11.608529   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608533   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608539   40000 command_runner.go:130] >       },
	I0816 13:12:11.608542   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608546   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608550   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608554   40000 command_runner.go:130] >     },
	I0816 13:12:11.608557   40000 command_runner.go:130] >     {
	I0816 13:12:11.608564   40000 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 13:12:11.608570   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608576   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 13:12:11.608580   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608583   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608593   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 13:12:11.608600   40000 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 13:12:11.608605   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608609   40000 command_runner.go:130] >       "size": "95233506",
	I0816 13:12:11.608613   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608617   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608620   40000 command_runner.go:130] >       },
	I0816 13:12:11.608624   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608629   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608634   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608637   40000 command_runner.go:130] >     },
	I0816 13:12:11.608641   40000 command_runner.go:130] >     {
	I0816 13:12:11.608653   40000 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 13:12:11.608658   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608664   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 13:12:11.608670   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608674   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608691   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 13:12:11.608702   40000 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 13:12:11.608709   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608716   40000 command_runner.go:130] >       "size": "89437512",
	I0816 13:12:11.608722   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608726   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608730   40000 command_runner.go:130] >       },
	I0816 13:12:11.608734   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608740   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608743   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608747   40000 command_runner.go:130] >     },
	I0816 13:12:11.608752   40000 command_runner.go:130] >     {
	I0816 13:12:11.608757   40000 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 13:12:11.608763   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608768   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 13:12:11.608778   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608783   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608792   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 13:12:11.608800   40000 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 13:12:11.608805   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608809   40000 command_runner.go:130] >       "size": "92728217",
	I0816 13:12:11.608813   40000 command_runner.go:130] >       "uid": null,
	I0816 13:12:11.608817   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608821   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608827   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608830   40000 command_runner.go:130] >     },
	I0816 13:12:11.608834   40000 command_runner.go:130] >     {
	I0816 13:12:11.608840   40000 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 13:12:11.608846   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608851   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 13:12:11.608856   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608860   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608867   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 13:12:11.608876   40000 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 13:12:11.608880   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608884   40000 command_runner.go:130] >       "size": "68420936",
	I0816 13:12:11.608890   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.608894   40000 command_runner.go:130] >         "value": "0"
	I0816 13:12:11.608897   40000 command_runner.go:130] >       },
	I0816 13:12:11.608901   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.608918   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.608924   40000 command_runner.go:130] >       "pinned": false
	I0816 13:12:11.608932   40000 command_runner.go:130] >     },
	I0816 13:12:11.608936   40000 command_runner.go:130] >     {
	I0816 13:12:11.608948   40000 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 13:12:11.608957   40000 command_runner.go:130] >       "repoTags": [
	I0816 13:12:11.608964   40000 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 13:12:11.608970   40000 command_runner.go:130] >       ],
	I0816 13:12:11.608974   40000 command_runner.go:130] >       "repoDigests": [
	I0816 13:12:11.608980   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 13:12:11.608990   40000 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 13:12:11.608994   40000 command_runner.go:130] >       ],
	I0816 13:12:11.609001   40000 command_runner.go:130] >       "size": "742080",
	I0816 13:12:11.609004   40000 command_runner.go:130] >       "uid": {
	I0816 13:12:11.609008   40000 command_runner.go:130] >         "value": "65535"
	I0816 13:12:11.609012   40000 command_runner.go:130] >       },
	I0816 13:12:11.609016   40000 command_runner.go:130] >       "username": "",
	I0816 13:12:11.609019   40000 command_runner.go:130] >       "spec": null,
	I0816 13:12:11.609023   40000 command_runner.go:130] >       "pinned": true
	I0816 13:12:11.609027   40000 command_runner.go:130] >     }
	I0816 13:12:11.609030   40000 command_runner.go:130] >   ]
	I0816 13:12:11.609035   40000 command_runner.go:130] > }
	I0816 13:12:11.609638   40000 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:12:11.609661   40000 cache_images.go:84] Images are preloaded, skipping loading
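	(For reference, the preload check logged above only needs the image IDs and repoTags from the crictl JSON listing. Below is a minimal sketch of that kind of check in Go; the struct fields mirror the fields visible in the output above, while the program itself and its required-image list are illustrative, not minikube's actual code.)

	    // Sketch: decode `sudo crictl images --output json` and report whether a set of
	    // required tags is already present, mirroring the fields seen in the log above.
	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )

	    type imageList struct {
	        Images []struct {
	            ID          string   `json:"id"`
	            RepoTags    []string `json:"repoTags"`
	            RepoDigests []string `json:"repoDigests"`
	            Size        string   `json:"size"`
	            Pinned      bool     `json:"pinned"`
	        } `json:"images"`
	    }

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	        if err != nil {
	            panic(err)
	        }
	        var list imageList
	        if err := json.Unmarshal(out, &list); err != nil {
	            panic(err)
	        }
	        have := map[string]bool{}
	        for _, img := range list.Images {
	            for _, tag := range img.RepoTags {
	                have[tag] = true
	            }
	        }
	        // Required tags taken from the listing above; adjust per Kubernetes version.
	        for _, want := range []string{
	            "registry.k8s.io/kube-apiserver:v1.31.0",
	            "registry.k8s.io/kube-controller-manager:v1.31.0",
	            "registry.k8s.io/kube-scheduler:v1.31.0",
	            "registry.k8s.io/kube-proxy:v1.31.0",
	            "registry.k8s.io/pause:3.10",
	        } {
	            fmt.Printf("%s preloaded: %v\n", want, have[want])
	        }
	    }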
	I0816 13:12:11.609669   40000 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.0 crio true true} ...
	I0816 13:12:11.609781   40000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-336982 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
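	(The kubelet ExecStart line logged above is rendered from the cluster config shown on the previous line: binary path for the Kubernetes version, node name, node IP, and kubeconfig paths. A minimal sketch of assembling such an override from those values follows; the type and function names are illustrative, not minikube's actual template.)

	    // Sketch: build the kubelet ExecStart override from the values visible in the
	    // log above (binary path, node name, node IP, kubeconfig paths).
	    package main

	    import "fmt"

	    type kubeletOpts struct {
	        KubernetesVersion string // e.g. "v1.31.0"
	        NodeName          string // e.g. "multinode-336982"
	        NodeIP            string // e.g. "192.168.39.208"
	    }

	    func execStart(o kubeletOpts) string {
	        return fmt.Sprintf(
	            "/var/lib/minikube/binaries/%s/kubelet "+
	                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
	                "--config=/var/lib/kubelet/config.yaml "+
	                "--hostname-override=%s "+
	                "--kubeconfig=/etc/kubernetes/kubelet.conf "+
	                "--node-ip=%s",
	            o.KubernetesVersion, o.NodeName, o.NodeIP)
	    }

	    func main() {
	        fmt.Println(execStart(kubeletOpts{
	            KubernetesVersion: "v1.31.0",
	            NodeName:          "multinode-336982",
	            NodeIP:            "192.168.39.208",
	        }))
	    }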
	I0816 13:12:11.609843   40000 ssh_runner.go:195] Run: crio config
	I0816 13:12:11.642737   40000 command_runner.go:130] ! time="2024-08-16 13:12:11.620759817Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0816 13:12:11.649389   40000 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0816 13:12:11.656708   40000 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0816 13:12:11.656728   40000 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0816 13:12:11.656735   40000 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0816 13:12:11.656739   40000 command_runner.go:130] > #
	I0816 13:12:11.656747   40000 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0816 13:12:11.656753   40000 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0816 13:12:11.656760   40000 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0816 13:12:11.656772   40000 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0816 13:12:11.656779   40000 command_runner.go:130] > # reload'.
	I0816 13:12:11.656788   40000 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0816 13:12:11.656801   40000 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0816 13:12:11.656810   40000 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0816 13:12:11.656822   40000 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0816 13:12:11.656830   40000 command_runner.go:130] > [crio]
	I0816 13:12:11.656842   40000 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0816 13:12:11.656853   40000 command_runner.go:130] > # containers images, in this directory.
	I0816 13:12:11.656860   40000 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0816 13:12:11.656876   40000 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0816 13:12:11.656885   40000 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0816 13:12:11.656893   40000 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0816 13:12:11.656899   40000 command_runner.go:130] > # imagestore = ""
	I0816 13:12:11.656923   40000 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0816 13:12:11.656936   40000 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0816 13:12:11.656943   40000 command_runner.go:130] > storage_driver = "overlay"
	I0816 13:12:11.656955   40000 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0816 13:12:11.656967   40000 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0816 13:12:11.656976   40000 command_runner.go:130] > storage_option = [
	I0816 13:12:11.656986   40000 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0816 13:12:11.656994   40000 command_runner.go:130] > ]
	I0816 13:12:11.657006   40000 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0816 13:12:11.657019   40000 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0816 13:12:11.657028   40000 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0816 13:12:11.657039   40000 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0816 13:12:11.657052   40000 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0816 13:12:11.657062   40000 command_runner.go:130] > # always happen on a node reboot
	I0816 13:12:11.657073   40000 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0816 13:12:11.657085   40000 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0816 13:12:11.657092   40000 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0816 13:12:11.657100   40000 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0816 13:12:11.657105   40000 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0816 13:12:11.657114   40000 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0816 13:12:11.657124   40000 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0816 13:12:11.657131   40000 command_runner.go:130] > # internal_wipe = true
	I0816 13:12:11.657139   40000 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0816 13:12:11.657146   40000 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0816 13:12:11.657150   40000 command_runner.go:130] > # internal_repair = false
	I0816 13:12:11.657157   40000 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0816 13:12:11.657163   40000 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0816 13:12:11.657171   40000 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0816 13:12:11.657178   40000 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0816 13:12:11.657185   40000 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0816 13:12:11.657191   40000 command_runner.go:130] > [crio.api]
	I0816 13:12:11.657196   40000 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0816 13:12:11.657201   40000 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0816 13:12:11.657208   40000 command_runner.go:130] > # IP address on which the stream server will listen.
	I0816 13:12:11.657212   40000 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0816 13:12:11.657220   40000 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0816 13:12:11.657229   40000 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0816 13:12:11.657235   40000 command_runner.go:130] > # stream_port = "0"
	I0816 13:12:11.657241   40000 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0816 13:12:11.657247   40000 command_runner.go:130] > # stream_enable_tls = false
	I0816 13:12:11.657255   40000 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0816 13:12:11.657261   40000 command_runner.go:130] > # stream_idle_timeout = ""
	I0816 13:12:11.657267   40000 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0816 13:12:11.657277   40000 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0816 13:12:11.657283   40000 command_runner.go:130] > # minutes.
	I0816 13:12:11.657288   40000 command_runner.go:130] > # stream_tls_cert = ""
	I0816 13:12:11.657294   40000 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0816 13:12:11.657302   40000 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0816 13:12:11.657309   40000 command_runner.go:130] > # stream_tls_key = ""
	I0816 13:12:11.657314   40000 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0816 13:12:11.657323   40000 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0816 13:12:11.657336   40000 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0816 13:12:11.657342   40000 command_runner.go:130] > # stream_tls_ca = ""
	I0816 13:12:11.657349   40000 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 13:12:11.657356   40000 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0816 13:12:11.657363   40000 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 13:12:11.657369   40000 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0816 13:12:11.657376   40000 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0816 13:12:11.657383   40000 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0816 13:12:11.657387   40000 command_runner.go:130] > [crio.runtime]
	I0816 13:12:11.657395   40000 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0816 13:12:11.657403   40000 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0816 13:12:11.657407   40000 command_runner.go:130] > # "nofile=1024:2048"
	I0816 13:12:11.657415   40000 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0816 13:12:11.657421   40000 command_runner.go:130] > # default_ulimits = [
	I0816 13:12:11.657425   40000 command_runner.go:130] > # ]
	I0816 13:12:11.657433   40000 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0816 13:12:11.657437   40000 command_runner.go:130] > # no_pivot = false
	I0816 13:12:11.657443   40000 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0816 13:12:11.657450   40000 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0816 13:12:11.657455   40000 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0816 13:12:11.657462   40000 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0816 13:12:11.657468   40000 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0816 13:12:11.657476   40000 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 13:12:11.657483   40000 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0816 13:12:11.657488   40000 command_runner.go:130] > # Cgroup setting for conmon
	I0816 13:12:11.657496   40000 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0816 13:12:11.657503   40000 command_runner.go:130] > conmon_cgroup = "pod"
	I0816 13:12:11.657509   40000 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0816 13:12:11.657517   40000 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0816 13:12:11.657526   40000 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 13:12:11.657532   40000 command_runner.go:130] > conmon_env = [
	I0816 13:12:11.657538   40000 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 13:12:11.657543   40000 command_runner.go:130] > ]
	I0816 13:12:11.657549   40000 command_runner.go:130] > # Additional environment variables to set for all the
	I0816 13:12:11.657556   40000 command_runner.go:130] > # containers. These are overridden if set in the
	I0816 13:12:11.657562   40000 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0816 13:12:11.657568   40000 command_runner.go:130] > # default_env = [
	I0816 13:12:11.657571   40000 command_runner.go:130] > # ]
	I0816 13:12:11.657577   40000 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0816 13:12:11.657586   40000 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0816 13:12:11.657592   40000 command_runner.go:130] > # selinux = false
	I0816 13:12:11.657597   40000 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0816 13:12:11.657606   40000 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0816 13:12:11.657613   40000 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0816 13:12:11.657618   40000 command_runner.go:130] > # seccomp_profile = ""
	I0816 13:12:11.657624   40000 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0816 13:12:11.657632   40000 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0816 13:12:11.657640   40000 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0816 13:12:11.657644   40000 command_runner.go:130] > # which might increase security.
	I0816 13:12:11.657651   40000 command_runner.go:130] > # This option is currently deprecated,
	I0816 13:12:11.657657   40000 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0816 13:12:11.657663   40000 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0816 13:12:11.657669   40000 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0816 13:12:11.657677   40000 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0816 13:12:11.657683   40000 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0816 13:12:11.657691   40000 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0816 13:12:11.657698   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.657703   40000 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0816 13:12:11.657710   40000 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0816 13:12:11.657714   40000 command_runner.go:130] > # the cgroup blockio controller.
	I0816 13:12:11.657720   40000 command_runner.go:130] > # blockio_config_file = ""
	I0816 13:12:11.657727   40000 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0816 13:12:11.657732   40000 command_runner.go:130] > # blockio parameters.
	I0816 13:12:11.657736   40000 command_runner.go:130] > # blockio_reload = false
	I0816 13:12:11.657744   40000 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0816 13:12:11.657750   40000 command_runner.go:130] > # irqbalance daemon.
	I0816 13:12:11.657755   40000 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0816 13:12:11.657761   40000 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0816 13:12:11.657769   40000 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0816 13:12:11.657779   40000 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0816 13:12:11.657786   40000 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0816 13:12:11.657796   40000 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0816 13:12:11.657803   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.657807   40000 command_runner.go:130] > # rdt_config_file = ""
	I0816 13:12:11.657814   40000 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0816 13:12:11.657819   40000 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0816 13:12:11.657835   40000 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0816 13:12:11.657841   40000 command_runner.go:130] > # separate_pull_cgroup = ""
	I0816 13:12:11.657847   40000 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0816 13:12:11.657854   40000 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0816 13:12:11.657862   40000 command_runner.go:130] > # will be added.
	I0816 13:12:11.657866   40000 command_runner.go:130] > # default_capabilities = [
	I0816 13:12:11.657872   40000 command_runner.go:130] > # 	"CHOWN",
	I0816 13:12:11.657876   40000 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0816 13:12:11.657881   40000 command_runner.go:130] > # 	"FSETID",
	I0816 13:12:11.657885   40000 command_runner.go:130] > # 	"FOWNER",
	I0816 13:12:11.657892   40000 command_runner.go:130] > # 	"SETGID",
	I0816 13:12:11.657896   40000 command_runner.go:130] > # 	"SETUID",
	I0816 13:12:11.657902   40000 command_runner.go:130] > # 	"SETPCAP",
	I0816 13:12:11.657906   40000 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0816 13:12:11.657912   40000 command_runner.go:130] > # 	"KILL",
	I0816 13:12:11.657915   40000 command_runner.go:130] > # ]
	I0816 13:12:11.657925   40000 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0816 13:12:11.657933   40000 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0816 13:12:11.657938   40000 command_runner.go:130] > # add_inheritable_capabilities = false
	I0816 13:12:11.657947   40000 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0816 13:12:11.657959   40000 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 13:12:11.657968   40000 command_runner.go:130] > default_sysctls = [
	I0816 13:12:11.657978   40000 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0816 13:12:11.657986   40000 command_runner.go:130] > ]
	I0816 13:12:11.657992   40000 command_runner.go:130] > # List of devices on the host that a
	I0816 13:12:11.658003   40000 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0816 13:12:11.658012   40000 command_runner.go:130] > # allowed_devices = [
	I0816 13:12:11.658018   40000 command_runner.go:130] > # 	"/dev/fuse",
	I0816 13:12:11.658026   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658034   40000 command_runner.go:130] > # List of additional devices. specified as
	I0816 13:12:11.658042   40000 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0816 13:12:11.658049   40000 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0816 13:12:11.658055   40000 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 13:12:11.658061   40000 command_runner.go:130] > # additional_devices = [
	I0816 13:12:11.658065   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658072   40000 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0816 13:12:11.658077   40000 command_runner.go:130] > # cdi_spec_dirs = [
	I0816 13:12:11.658082   40000 command_runner.go:130] > # 	"/etc/cdi",
	I0816 13:12:11.658087   40000 command_runner.go:130] > # 	"/var/run/cdi",
	I0816 13:12:11.658092   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658098   40000 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0816 13:12:11.658106   40000 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0816 13:12:11.658112   40000 command_runner.go:130] > # Defaults to false.
	I0816 13:12:11.658117   40000 command_runner.go:130] > # device_ownership_from_security_context = false
	I0816 13:12:11.658125   40000 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0816 13:12:11.658133   40000 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0816 13:12:11.658138   40000 command_runner.go:130] > # hooks_dir = [
	I0816 13:12:11.658145   40000 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0816 13:12:11.658148   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658154   40000 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0816 13:12:11.658162   40000 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0816 13:12:11.658169   40000 command_runner.go:130] > # its default mounts from the following two files:
	I0816 13:12:11.658175   40000 command_runner.go:130] > #
	I0816 13:12:11.658181   40000 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0816 13:12:11.658189   40000 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0816 13:12:11.658197   40000 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0816 13:12:11.658201   40000 command_runner.go:130] > #
	I0816 13:12:11.658209   40000 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0816 13:12:11.658219   40000 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0816 13:12:11.658230   40000 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0816 13:12:11.658237   40000 command_runner.go:130] > #      only add mounts it finds in this file.
	I0816 13:12:11.658241   40000 command_runner.go:130] > #
	I0816 13:12:11.658246   40000 command_runner.go:130] > # default_mounts_file = ""
	I0816 13:12:11.658252   40000 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0816 13:12:11.658261   40000 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0816 13:12:11.658265   40000 command_runner.go:130] > pids_limit = 1024
	I0816 13:12:11.658273   40000 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0816 13:12:11.658281   40000 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0816 13:12:11.658290   40000 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0816 13:12:11.658297   40000 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0816 13:12:11.658303   40000 command_runner.go:130] > # log_size_max = -1
	I0816 13:12:11.658310   40000 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0816 13:12:11.658316   40000 command_runner.go:130] > # log_to_journald = false
	I0816 13:12:11.658323   40000 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0816 13:12:11.658332   40000 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0816 13:12:11.658339   40000 command_runner.go:130] > # Path to directory for container attach sockets.
	I0816 13:12:11.658344   40000 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0816 13:12:11.658352   40000 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0816 13:12:11.658356   40000 command_runner.go:130] > # bind_mount_prefix = ""
	I0816 13:12:11.658363   40000 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0816 13:12:11.658367   40000 command_runner.go:130] > # read_only = false
	I0816 13:12:11.658375   40000 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0816 13:12:11.658385   40000 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0816 13:12:11.658391   40000 command_runner.go:130] > # live configuration reload.
	I0816 13:12:11.658395   40000 command_runner.go:130] > # log_level = "info"
	I0816 13:12:11.658401   40000 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0816 13:12:11.658407   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.658411   40000 command_runner.go:130] > # log_filter = ""
	I0816 13:12:11.658420   40000 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0816 13:12:11.658428   40000 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0816 13:12:11.658435   40000 command_runner.go:130] > # separated by comma.
	I0816 13:12:11.658442   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658448   40000 command_runner.go:130] > # uid_mappings = ""
	I0816 13:12:11.658453   40000 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0816 13:12:11.658461   40000 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0816 13:12:11.658467   40000 command_runner.go:130] > # separated by comma.
	I0816 13:12:11.658476   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658482   40000 command_runner.go:130] > # gid_mappings = ""
	I0816 13:12:11.658488   40000 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0816 13:12:11.658496   40000 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 13:12:11.658504   40000 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 13:12:11.658514   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658520   40000 command_runner.go:130] > # minimum_mappable_uid = -1
	I0816 13:12:11.658526   40000 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0816 13:12:11.658534   40000 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 13:12:11.658543   40000 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 13:12:11.658553   40000 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 13:12:11.658559   40000 command_runner.go:130] > # minimum_mappable_gid = -1
	I0816 13:12:11.658565   40000 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0816 13:12:11.658573   40000 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0816 13:12:11.658581   40000 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0816 13:12:11.658585   40000 command_runner.go:130] > # ctr_stop_timeout = 30
	I0816 13:12:11.658592   40000 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0816 13:12:11.658601   40000 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0816 13:12:11.658607   40000 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0816 13:12:11.658614   40000 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0816 13:12:11.658618   40000 command_runner.go:130] > drop_infra_ctr = false
	I0816 13:12:11.658626   40000 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0816 13:12:11.658634   40000 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0816 13:12:11.658644   40000 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0816 13:12:11.658649   40000 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0816 13:12:11.658656   40000 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0816 13:12:11.658664   40000 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0816 13:12:11.658671   40000 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0816 13:12:11.658676   40000 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0816 13:12:11.658682   40000 command_runner.go:130] > # shared_cpuset = ""
	I0816 13:12:11.658687   40000 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0816 13:12:11.658694   40000 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0816 13:12:11.658699   40000 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0816 13:12:11.658707   40000 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0816 13:12:11.658714   40000 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0816 13:12:11.658719   40000 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0816 13:12:11.658727   40000 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0816 13:12:11.658733   40000 command_runner.go:130] > # enable_criu_support = false
	I0816 13:12:11.658738   40000 command_runner.go:130] > # Enable/disable the generation of the container,
	I0816 13:12:11.658746   40000 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0816 13:12:11.658753   40000 command_runner.go:130] > # enable_pod_events = false
	I0816 13:12:11.658759   40000 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 13:12:11.658767   40000 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 13:12:11.658774   40000 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0816 13:12:11.658779   40000 command_runner.go:130] > # default_runtime = "runc"
	I0816 13:12:11.658786   40000 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0816 13:12:11.658793   40000 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0816 13:12:11.658804   40000 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0816 13:12:11.658811   40000 command_runner.go:130] > # creation as a file is not desired either.
	I0816 13:12:11.658819   40000 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0816 13:12:11.658825   40000 command_runner.go:130] > # the hostname is being managed dynamically.
	I0816 13:12:11.658830   40000 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0816 13:12:11.658835   40000 command_runner.go:130] > # ]
	I0816 13:12:11.658841   40000 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0816 13:12:11.658849   40000 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0816 13:12:11.658857   40000 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0816 13:12:11.658864   40000 command_runner.go:130] > # Each entry in the table should follow the format:
	I0816 13:12:11.658868   40000 command_runner.go:130] > #
	I0816 13:12:11.658873   40000 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0816 13:12:11.658880   40000 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0816 13:12:11.658920   40000 command_runner.go:130] > # runtime_type = "oci"
	I0816 13:12:11.658929   40000 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0816 13:12:11.658933   40000 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0816 13:12:11.658938   40000 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0816 13:12:11.658945   40000 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0816 13:12:11.658950   40000 command_runner.go:130] > # monitor_env = []
	I0816 13:12:11.658960   40000 command_runner.go:130] > # privileged_without_host_devices = false
	I0816 13:12:11.658969   40000 command_runner.go:130] > # allowed_annotations = []
	I0816 13:12:11.658980   40000 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0816 13:12:11.658988   40000 command_runner.go:130] > # Where:
	I0816 13:12:11.658999   40000 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0816 13:12:11.659012   40000 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0816 13:12:11.659024   40000 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0816 13:12:11.659036   40000 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0816 13:12:11.659045   40000 command_runner.go:130] > #   in $PATH.
	I0816 13:12:11.659058   40000 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0816 13:12:11.659067   40000 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0816 13:12:11.659075   40000 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0816 13:12:11.659082   40000 command_runner.go:130] > #   state.
	I0816 13:12:11.659088   40000 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0816 13:12:11.659096   40000 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0816 13:12:11.659104   40000 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0816 13:12:11.659112   40000 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0816 13:12:11.659118   40000 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0816 13:12:11.659126   40000 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0816 13:12:11.659135   40000 command_runner.go:130] > #   The currently recognized values are:
	I0816 13:12:11.659143   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0816 13:12:11.659153   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0816 13:12:11.659160   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0816 13:12:11.659168   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0816 13:12:11.659180   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0816 13:12:11.659188   40000 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0816 13:12:11.659197   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0816 13:12:11.659205   40000 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0816 13:12:11.659214   40000 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0816 13:12:11.659221   40000 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0816 13:12:11.659231   40000 command_runner.go:130] > #   deprecated option "conmon".
	I0816 13:12:11.659238   40000 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0816 13:12:11.659245   40000 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0816 13:12:11.659252   40000 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0816 13:12:11.659259   40000 command_runner.go:130] > #   should be moved to the container's cgroup
	I0816 13:12:11.659266   40000 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0816 13:12:11.659273   40000 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0816 13:12:11.659279   40000 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0816 13:12:11.659287   40000 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0816 13:12:11.659290   40000 command_runner.go:130] > #
	I0816 13:12:11.659297   40000 command_runner.go:130] > # Using the seccomp notifier feature:
	I0816 13:12:11.659300   40000 command_runner.go:130] > #
	I0816 13:12:11.659306   40000 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0816 13:12:11.659314   40000 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0816 13:12:11.659320   40000 command_runner.go:130] > #
	I0816 13:12:11.659326   40000 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0816 13:12:11.659334   40000 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0816 13:12:11.659337   40000 command_runner.go:130] > #
	I0816 13:12:11.659344   40000 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0816 13:12:11.659349   40000 command_runner.go:130] > # feature.
	I0816 13:12:11.659353   40000 command_runner.go:130] > #
	I0816 13:12:11.659361   40000 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0816 13:12:11.659367   40000 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0816 13:12:11.659375   40000 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0816 13:12:11.659383   40000 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0816 13:12:11.659391   40000 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0816 13:12:11.659394   40000 command_runner.go:130] > #
	I0816 13:12:11.659400   40000 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0816 13:12:11.659408   40000 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0816 13:12:11.659413   40000 command_runner.go:130] > #
	I0816 13:12:11.659419   40000 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0816 13:12:11.659426   40000 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0816 13:12:11.659429   40000 command_runner.go:130] > #
	I0816 13:12:11.659435   40000 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0816 13:12:11.659443   40000 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0816 13:12:11.659449   40000 command_runner.go:130] > # limitation.
	I0816 13:12:11.659454   40000 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0816 13:12:11.659461   40000 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0816 13:12:11.659464   40000 command_runner.go:130] > runtime_type = "oci"
	I0816 13:12:11.659468   40000 command_runner.go:130] > runtime_root = "/run/runc"
	I0816 13:12:11.659474   40000 command_runner.go:130] > runtime_config_path = ""
	I0816 13:12:11.659482   40000 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0816 13:12:11.659486   40000 command_runner.go:130] > monitor_cgroup = "pod"
	I0816 13:12:11.659492   40000 command_runner.go:130] > monitor_exec_cgroup = ""
	I0816 13:12:11.659496   40000 command_runner.go:130] > monitor_env = [
	I0816 13:12:11.659504   40000 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 13:12:11.659508   40000 command_runner.go:130] > ]
	I0816 13:12:11.659513   40000 command_runner.go:130] > privileged_without_host_devices = false
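The runc entry above is the concrete counterpart of the [crio.runtime.runtimes] format documented in the comments. As a hedged sketch only (the handler name "runc-debug", the drop-in file name, and restarting rather than reloading CRI-O are assumptions, not something minikube does here), an extra handler allowed to process the seccomp notifier annotation could be declared as a drop-in:

	# hypothetical drop-in; /etc/crio/crio.conf.d is the standard override directory
	sudo tee /etc/crio/crio.conf.d/10-runc-debug.conf <<-'EOF'
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio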
	I0816 13:12:11.659521   40000 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0816 13:12:11.659528   40000 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0816 13:12:11.659535   40000 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0816 13:12:11.659544   40000 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0816 13:12:11.659553   40000 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0816 13:12:11.659559   40000 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0816 13:12:11.659570   40000 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0816 13:12:11.659580   40000 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0816 13:12:11.659587   40000 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0816 13:12:11.659594   40000 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0816 13:12:11.659598   40000 command_runner.go:130] > # Example:
	I0816 13:12:11.659603   40000 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0816 13:12:11.659608   40000 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0816 13:12:11.659613   40000 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0816 13:12:11.659617   40000 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0816 13:12:11.659621   40000 command_runner.go:130] > # cpuset = 0
	I0816 13:12:11.659624   40000 command_runner.go:130] > # cpushares = "0-1"
	I0816 13:12:11.659628   40000 command_runner.go:130] > # Where:
	I0816 13:12:11.659632   40000 command_runner.go:130] > # The workload name is workload-type.
	I0816 13:12:11.659639   40000 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0816 13:12:11.659644   40000 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0816 13:12:11.659649   40000 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0816 13:12:11.659662   40000 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0816 13:12:11.659668   40000 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
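A hedged illustration of the opt-in described in these comments (the pod name, container name, and cpushares value are made up; the annotation keys mirror the example above):

	cat <<-'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                               # activation annotation, key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override, as in the example above
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF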
	I0816 13:12:11.659672   40000 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0816 13:12:11.659678   40000 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0816 13:12:11.659682   40000 command_runner.go:130] > # Default value is set to true
	I0816 13:12:11.659687   40000 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0816 13:12:11.659692   40000 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0816 13:12:11.659696   40000 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0816 13:12:11.659702   40000 command_runner.go:130] > # Default value is set to 'false'
	I0816 13:12:11.659706   40000 command_runner.go:130] > # disable_hostport_mapping = false
	I0816 13:12:11.659712   40000 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0816 13:12:11.659715   40000 command_runner.go:130] > #
	I0816 13:12:11.659720   40000 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0816 13:12:11.659726   40000 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0816 13:12:11.659733   40000 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0816 13:12:11.659739   40000 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0816 13:12:11.659745   40000 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0816 13:12:11.659748   40000 command_runner.go:130] > [crio.image]
	I0816 13:12:11.659753   40000 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0816 13:12:11.659758   40000 command_runner.go:130] > # default_transport = "docker://"
	I0816 13:12:11.659764   40000 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0816 13:12:11.659773   40000 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0816 13:12:11.659777   40000 command_runner.go:130] > # global_auth_file = ""
	I0816 13:12:11.659784   40000 command_runner.go:130] > # The image used to instantiate infra containers.
	I0816 13:12:11.659788   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.659795   40000 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0816 13:12:11.659801   40000 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0816 13:12:11.659809   40000 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0816 13:12:11.659814   40000 command_runner.go:130] > # This option supports live configuration reload.
	I0816 13:12:11.659820   40000 command_runner.go:130] > # pause_image_auth_file = ""
	I0816 13:12:11.659826   40000 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0816 13:12:11.659835   40000 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0816 13:12:11.659843   40000 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0816 13:12:11.659851   40000 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0816 13:12:11.659857   40000 command_runner.go:130] > # pause_command = "/pause"
	I0816 13:12:11.659863   40000 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0816 13:12:11.659877   40000 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0816 13:12:11.659886   40000 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0816 13:12:11.659894   40000 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0816 13:12:11.659903   40000 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0816 13:12:11.659911   40000 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0816 13:12:11.659917   40000 command_runner.go:130] > # pinned_images = [
	I0816 13:12:11.659921   40000 command_runner.go:130] > # ]
	I0816 13:12:11.659930   40000 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0816 13:12:11.659938   40000 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0816 13:12:11.659947   40000 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0816 13:12:11.659959   40000 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0816 13:12:11.659970   40000 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0816 13:12:11.659978   40000 command_runner.go:130] > # signature_policy = ""
	I0816 13:12:11.659990   40000 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0816 13:12:11.660003   40000 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0816 13:12:11.660015   40000 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0816 13:12:11.660027   40000 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0816 13:12:11.660039   40000 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0816 13:12:11.660049   40000 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0816 13:12:11.660058   40000 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0816 13:12:11.660065   40000 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0816 13:12:11.660072   40000 command_runner.go:130] > # changing them here.
	I0816 13:12:11.660076   40000 command_runner.go:130] > # insecure_registries = [
	I0816 13:12:11.660081   40000 command_runner.go:130] > # ]
	I0816 13:12:11.660087   40000 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0816 13:12:11.660094   40000 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0816 13:12:11.660099   40000 command_runner.go:130] > # image_volumes = "mkdir"
	I0816 13:12:11.660106   40000 command_runner.go:130] > # Temporary directory to use for storing big files
	I0816 13:12:11.660110   40000 command_runner.go:130] > # big_files_temporary_dir = ""
	I0816 13:12:11.660118   40000 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0816 13:12:11.660122   40000 command_runner.go:130] > # CNI plugins.
	I0816 13:12:11.660128   40000 command_runner.go:130] > [crio.network]
	I0816 13:12:11.660134   40000 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0816 13:12:11.660141   40000 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0816 13:12:11.660145   40000 command_runner.go:130] > # cni_default_network = ""
	I0816 13:12:11.660152   40000 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0816 13:12:11.660164   40000 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0816 13:12:11.660172   40000 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0816 13:12:11.660179   40000 command_runner.go:130] > # plugin_dirs = [
	I0816 13:12:11.660183   40000 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0816 13:12:11.660188   40000 command_runner.go:130] > # ]
	I0816 13:12:11.660194   40000 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0816 13:12:11.660197   40000 command_runner.go:130] > [crio.metrics]
	I0816 13:12:11.660203   40000 command_runner.go:130] > # Globally enable or disable metrics support.
	I0816 13:12:11.660207   40000 command_runner.go:130] > enable_metrics = true
	I0816 13:12:11.660214   40000 command_runner.go:130] > # Specify enabled metrics collectors.
	I0816 13:12:11.660219   40000 command_runner.go:130] > # Per default all metrics are enabled.
	I0816 13:12:11.660230   40000 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0816 13:12:11.660238   40000 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0816 13:12:11.660246   40000 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0816 13:12:11.660250   40000 command_runner.go:130] > # metrics_collectors = [
	I0816 13:12:11.660256   40000 command_runner.go:130] > # 	"operations",
	I0816 13:12:11.660261   40000 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0816 13:12:11.660267   40000 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0816 13:12:11.660271   40000 command_runner.go:130] > # 	"operations_errors",
	I0816 13:12:11.660275   40000 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0816 13:12:11.660281   40000 command_runner.go:130] > # 	"image_pulls_by_name",
	I0816 13:12:11.660286   40000 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0816 13:12:11.660293   40000 command_runner.go:130] > # 	"image_pulls_failures",
	I0816 13:12:11.660297   40000 command_runner.go:130] > # 	"image_pulls_successes",
	I0816 13:12:11.660303   40000 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0816 13:12:11.660309   40000 command_runner.go:130] > # 	"image_layer_reuse",
	I0816 13:12:11.660316   40000 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0816 13:12:11.660320   40000 command_runner.go:130] > # 	"containers_oom_total",
	I0816 13:12:11.660326   40000 command_runner.go:130] > # 	"containers_oom",
	I0816 13:12:11.660330   40000 command_runner.go:130] > # 	"processes_defunct",
	I0816 13:12:11.660336   40000 command_runner.go:130] > # 	"operations_total",
	I0816 13:12:11.660340   40000 command_runner.go:130] > # 	"operations_latency_seconds",
	I0816 13:12:11.660346   40000 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0816 13:12:11.660351   40000 command_runner.go:130] > # 	"operations_errors_total",
	I0816 13:12:11.660358   40000 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0816 13:12:11.660362   40000 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0816 13:12:11.660372   40000 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0816 13:12:11.660379   40000 command_runner.go:130] > # 	"image_pulls_success_total",
	I0816 13:12:11.660383   40000 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0816 13:12:11.660389   40000 command_runner.go:130] > # 	"containers_oom_count_total",
	I0816 13:12:11.660393   40000 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0816 13:12:11.660400   40000 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0816 13:12:11.660403   40000 command_runner.go:130] > # ]
	I0816 13:12:11.660410   40000 command_runner.go:130] > # The port on which the metrics server will listen.
	I0816 13:12:11.660414   40000 command_runner.go:130] > # metrics_port = 9090
	I0816 13:12:11.660419   40000 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0816 13:12:11.660425   40000 command_runner.go:130] > # metrics_socket = ""
	I0816 13:12:11.660430   40000 command_runner.go:130] > # The certificate for the secure metrics server.
	I0816 13:12:11.660437   40000 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0816 13:12:11.660443   40000 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0816 13:12:11.660450   40000 command_runner.go:130] > # certificate on any modification event.
	I0816 13:12:11.660454   40000 command_runner.go:130] > # metrics_cert = ""
	I0816 13:12:11.660461   40000 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0816 13:12:11.660466   40000 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0816 13:12:11.660472   40000 command_runner.go:130] > # metrics_key = ""
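Since enable_metrics is set to true above, the Prometheus endpoint can be scraped directly on the node; a quick check might look like this (9090 is only the commented-out default port, so treating it as the effective value is an assumption):

	curl -s http://127.0.0.1:9090/metrics | grep -i crio | head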
	I0816 13:12:11.660479   40000 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0816 13:12:11.660485   40000 command_runner.go:130] > [crio.tracing]
	I0816 13:12:11.660491   40000 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0816 13:12:11.660497   40000 command_runner.go:130] > # enable_tracing = false
	I0816 13:12:11.660503   40000 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0816 13:12:11.660509   40000 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0816 13:12:11.660516   40000 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0816 13:12:11.660522   40000 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0816 13:12:11.660526   40000 command_runner.go:130] > # CRI-O NRI configuration.
	I0816 13:12:11.660532   40000 command_runner.go:130] > [crio.nri]
	I0816 13:12:11.660536   40000 command_runner.go:130] > # Globally enable or disable NRI.
	I0816 13:12:11.660542   40000 command_runner.go:130] > # enable_nri = false
	I0816 13:12:11.660546   40000 command_runner.go:130] > # NRI socket to listen on.
	I0816 13:12:11.660553   40000 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0816 13:12:11.660557   40000 command_runner.go:130] > # NRI plugin directory to use.
	I0816 13:12:11.660565   40000 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0816 13:12:11.660570   40000 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0816 13:12:11.660581   40000 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0816 13:12:11.660588   40000 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0816 13:12:11.660593   40000 command_runner.go:130] > # nri_disable_connections = false
	I0816 13:12:11.660600   40000 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0816 13:12:11.660604   40000 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0816 13:12:11.660611   40000 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0816 13:12:11.660615   40000 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0816 13:12:11.660621   40000 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0816 13:12:11.660627   40000 command_runner.go:130] > [crio.stats]
	I0816 13:12:11.660633   40000 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0816 13:12:11.660640   40000 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0816 13:12:11.660645   40000 command_runner.go:130] > # stats_collection_period = 0
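Everything echoed above is CRI-O's configuration being read back line by line; the same content can be inspected on the node directly (these paths are the usual defaults and may differ per image):

	sudo cat /etc/crio/crio.conf
	sudo cat /etc/crio/crio.conf.d/*.conf   # drop-in overrides, if any are present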
	I0816 13:12:11.660799   40000 cni.go:84] Creating CNI manager for ""
	I0816 13:12:11.660814   40000 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 13:12:11.660824   40000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:12:11.660845   40000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-336982 NodeName:multinode-336982 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:12:11.660990   40000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-336982"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:12:11.661060   40000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:12:11.671252   40000 command_runner.go:130] > kubeadm
	I0816 13:12:11.671269   40000 command_runner.go:130] > kubectl
	I0816 13:12:11.671276   40000 command_runner.go:130] > kubelet
	I0816 13:12:11.671288   40000 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:12:11.671330   40000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:12:11.680410   40000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 13:12:11.697244   40000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:12:11.713522   40000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
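The rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new before use. As a hedged aside (kubeadm config validate is a relatively recent subcommand, so treat its availability in v1.31.0 as an assumption), the file could be sanity-checked in place with the binaries found above:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new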
	I0816 13:12:11.730045   40000 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I0816 13:12:11.733977   40000 command_runner.go:130] > 192.168.39.208	control-plane.minikube.internal
	I0816 13:12:11.734038   40000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:12:11.886814   40000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:12:11.901857   40000 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982 for IP: 192.168.39.208
	I0816 13:12:11.901881   40000 certs.go:194] generating shared ca certs ...
	I0816 13:12:11.901895   40000 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:12:11.902096   40000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:12:11.902217   40000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:12:11.902232   40000 certs.go:256] generating profile certs ...
	I0816 13:12:11.902338   40000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/client.key
	I0816 13:12:11.902409   40000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.key.0d3a4771
	I0816 13:12:11.902462   40000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.key
	I0816 13:12:11.902476   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 13:12:11.902497   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 13:12:11.902515   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 13:12:11.902533   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 13:12:11.902547   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 13:12:11.902565   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 13:12:11.902584   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 13:12:11.902606   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 13:12:11.902669   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:12:11.902709   40000 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:12:11.902724   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:12:11.902757   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:12:11.902787   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:12:11.902826   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:12:11.902879   40000 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:12:11.902917   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> /usr/share/ca-certificates/111492.pem
	I0816 13:12:11.902936   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:11.902956   40000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem -> /usr/share/ca-certificates/11149.pem
	I0816 13:12:11.903555   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:12:11.928501   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:12:11.951991   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:12:11.975912   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:12:11.998856   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:12:12.023369   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:12:12.046695   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:12:12.069962   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/multinode-336982/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:12:12.093558   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:12:12.116801   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:12:12.140641   40000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:12:12.163663   40000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:12:12.180712   40000 ssh_runner.go:195] Run: openssl version
	I0816 13:12:12.186359   40000 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0816 13:12:12.186476   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:12:12.198403   40000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.206047   40000 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.206313   40000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.206358   40000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:12:12.215453   40000 command_runner.go:130] > b5213941
	I0816 13:12:12.215708   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:12:12.242165   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:12:12.266885   40000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.273182   40000 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.273359   40000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.273422   40000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:12:12.281766   40000 command_runner.go:130] > 51391683
	I0816 13:12:12.282058   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:12:12.310543   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:12:12.337804   40000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.342702   40000 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.342737   40000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.342791   40000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:12:12.353611   40000 command_runner.go:130] > 3ec20f2e
	I0816 13:12:12.353757   40000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
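The three blocks above all follow the same OpenSSL hashed-directory convention: compute the certificate's subject hash, then link <hash>.0 in /etc/ssl/certs back to the CA file. A condensed sketch of what was just executed, using the minikubeCA file from above:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"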
	I0816 13:12:12.364036   40000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:12:12.374562   40000 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:12:12.374595   40000 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0816 13:12:12.374603   40000 command_runner.go:130] > Device: 253,1	Inode: 5244438     Links: 1
	I0816 13:12:12.374612   40000 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 13:12:12.374624   40000 command_runner.go:130] > Access: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374635   40000 command_runner.go:130] > Modify: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374648   40000 command_runner.go:130] > Change: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374659   40000 command_runner.go:130] >  Birth: 2024-08-16 13:05:25.988149491 +0000
	I0816 13:12:12.374726   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:12:12.383792   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.383867   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:12:12.389619   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.389697   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:12:12.395595   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.395668   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:12:12.403744   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.403992   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:12:12.415146   40000 command_runner.go:130] > Certificate will not expire
	I0816 13:12:12.417091   40000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:12:12.427767   40000 command_runner.go:130] > Certificate will not expire
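Each "Certificate will not expire" line is the success path of openssl's -checkend test, which exits 0 when the certificate is still valid 86400 seconds (24 hours) from now. A minimal sketch of the same check on one of the certificates above:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "still valid in 24h" || echo "expires within 24h"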
	I0816 13:12:12.427837   40000 kubeadm.go:392] StartCluster: {Name:multinode-336982 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-336982 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.145 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:12:12.427937   40000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:12:12.427987   40000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:12:12.492379   40000 command_runner.go:130] > 1bf884fd123a86f6a94ab5aea8257e3302f8a85a9269f32ebf4329e5e3a47b39
	I0816 13:12:12.492407   40000 command_runner.go:130] > bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe
	I0816 13:12:12.492416   40000 command_runner.go:130] > 851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383
	I0816 13:12:12.492431   40000 command_runner.go:130] > 171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793
	I0816 13:12:12.492605   40000 command_runner.go:130] > 212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463
	I0816 13:12:12.492755   40000 command_runner.go:130] > 65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70
	I0816 13:12:12.492845   40000 command_runner.go:130] > 5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23
	I0816 13:12:12.492920   40000 command_runner.go:130] > 99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431
	I0816 13:12:12.495704   40000 cri.go:89] found id: "1bf884fd123a86f6a94ab5aea8257e3302f8a85a9269f32ebf4329e5e3a47b39"
	I0816 13:12:12.495721   40000 cri.go:89] found id: "bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe"
	I0816 13:12:12.495727   40000 cri.go:89] found id: "851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383"
	I0816 13:12:12.495732   40000 cri.go:89] found id: "171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793"
	I0816 13:12:12.495735   40000 cri.go:89] found id: "212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463"
	I0816 13:12:12.495740   40000 cri.go:89] found id: "65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70"
	I0816 13:12:12.495744   40000 cri.go:89] found id: "5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23"
	I0816 13:12:12.495747   40000 cri.go:89] found id: "99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431"
	I0816 13:12:12.495751   40000 cri.go:89] found id: ""
	I0816 13:12:12.495800   40000 ssh_runner.go:195] Run: sudo runc list -f json
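The container IDs listed above come from filtering on the Kubernetes namespace label; the same listing (and the runc follow-up minikube runs next) can be reproduced on the node like this:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json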
	
	
	==> CRI-O <==
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.364052140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814185364030039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ec4979f-a79c-4739-91db-0d88575d4571 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.364743675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f35f57fb-4b35-4c7b-ad98-9886261ff0d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.364800894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f35f57fb-4b35-4c7b-ad98-9886261ff0d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.365145652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f35f57fb-4b35-4c7b-ad98-9886261ff0d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.406135679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14ad3981-1e23-498f-9022-80ac19be643d name=/runtime.v1.RuntimeService/Version
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.406210068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14ad3981-1e23-498f-9022-80ac19be643d name=/runtime.v1.RuntimeService/Version
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.408066884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bba5b8b-313f-4f28-96f9-43c724dd6aba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.408539209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814185408515898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bba5b8b-313f-4f28-96f9-43c724dd6aba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.409093666Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ecf28fa-186d-40ea-ae2a-3ba12a7e9172 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.409143357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ecf28fa-186d-40ea-ae2a-3ba12a7e9172 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.409840707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ecf28fa-186d-40ea-ae2a-3ba12a7e9172 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.512177355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c2faef3-bfb3-4228-a509-6717c050681c name=/runtime.v1.RuntimeService/Version
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.512257885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c2faef3-bfb3-4228-a509-6717c050681c name=/runtime.v1.RuntimeService/Version
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.513260572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03e8f7c5-0290-43bf-bfa3-12399a6e588f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.513732937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814185513708374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03e8f7c5-0290-43bf-bfa3-12399a6e588f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.514247671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=127f2397-2732-4a19-a855-0a3f9b89f6b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.514307842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=127f2397-2732-4a19-a855-0a3f9b89f6b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:16:25 multinode-336982 crio[2748]: time="2024-08-16 13:16:25.514863863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a9414759f87719c77eee45e982c56ec13250cff68aba0734f305f822c9fe9b4,PodSandboxId:4a67ef1185ff3fdf7dbb57b9cf130f1f6212b274b5219a5c1d9d1d391dd38b93,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723813970975893003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723813945450948375,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde,PodSandboxId:ae2403f8e9d76f3bdabb33554edf3b9f64cc80066f556e4c812288ce9adfcb88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723813937603021042,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f6
6b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c,PodSandboxId:d832dc0a065b1be593654f0f4a53a7189f4171989f83e961774cf7f3ef53fb4c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723813937653061342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97,PodSandboxId:a0eb69d1a32b259eac3f51ed898343a647a756f7c0f14837d87860e213b6b6d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723813937460567822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741,PodSandboxId:b03c7e115b2d36ba00eb1b23a5a8461e1c2166b743d98a68032bff3448c0ad01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723813937443960933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777e3c00b611eab51e49b9f47ec33b57e3b049a860b59b4e99ba6924ab849b92,PodSandboxId:c93bff5a6499e1b0f4e2794a1140ce6853b91d77fbbf50229d7ed7e3e4a3ece3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723813937476028014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f,PodSandboxId:ce022a6356e7f35f41dffa77eafed98ba920b85d4d1d9184bd9710d2d2e931a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723813937365956311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc,PodSandboxId:f173ddb920f3c36f95768ea64edf8b554de4a85232cf40bd9d5a2819359861c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723813937319149997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521,PodSandboxId:d043b69b8d054de1d1e28b7b5d2cafe491e7da223056f81c4994bc63f44c047a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723813932422085150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hlww9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca73c48-3261-41f5-ab42-eeedd88963e1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51fce230c5a5add847549e4b80c0af67c7829b5cbc70fae3d0fc0e77df2922fd,PodSandboxId:c40879fe636ed06ebf7e08733c05d197ceccb9f2a7d03263e31f21e861d4eed0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723813612436947929,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-m9dxd,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ae930d-fd36-432b-bcc5-dabed3bccf88,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650b256082f2286f5edf9635d8701a768b8e0725633fe268a78e645daebefe,PodSandboxId:86a68133870d872722d73bd0d0865707a96f664d06848e8bbc5c1caae5c37e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723813555841387819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e0b36228-a7b4-472f-84a8-fd741f3ec98f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383,PodSandboxId:f6cd1d65812a6f90dd9f1edcad13b4abe58c7f1bfd1354e589902918e38a0081,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723813544241323796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6n4gk,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c810fd88-0141-4de3-ba4f-df2fc8a80fd7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793,PodSandboxId:1af3d34c2cf06f14f0d6905b12a75635ff29624c535f579cc354ac67c9e38df0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723813540454543415,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5nrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1d280e9e-5cf6-4085-b0f8-44f66b50d628,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70,PodSandboxId:a53322d16f113ed80dd70eeb5c6dbcd7d16f4188dbab27f46f87ec40d1cbf585,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723813529636993225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8dda651fbb89b3aee00d92913e93916
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463,PodSandboxId:032c3a483ab2c52f59e816355da58d5d0663a9ea40398ee3a14f6ca439ccb1e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723813529673786686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6772094a87c90b58c0abaaf77a03215c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23,PodSandboxId:e961af5b081dac6fdf599d48c2e23a435171b48020c3067f05f0272eaf27aff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723813529616989104,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ccf05dc75dab1bf425f41781a5a4c,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431,PodSandboxId:98262cbe870074e579385169515b57d9111d6dfe3690d3e781eb73b5e75f9d76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723813529589334843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-336982,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9801eff48296cf0bd365840495007eee,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=127f2397-2732-4a19-a855-0a3f9b89f6b0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a9414759f877       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   4a67ef1185ff3       busybox-7dff88458-m9dxd
	e05069c8cc20d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   d043b69b8d054       coredns-6f6b679f8f-hlww9
	17bb1938edcda       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   d832dc0a065b1       kindnet-6n4gk
	e8ca000436a53       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   ae2403f8e9d76       kube-proxy-f5nrl
	777e3c00b611e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   c93bff5a6499e       storage-provisioner
	330eb5847d324       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   a0eb69d1a32b2       kube-scheduler-multinode-336982
	d9919750845f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   b03c7e115b2d3       etcd-multinode-336982
	326c1e94f5e1f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   ce022a6356e7f       kube-controller-manager-multinode-336982
	9a0e2cdd1f40c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   f173ddb920f3c       kube-apiserver-multinode-336982
	7a556ba4d113b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   d043b69b8d054       coredns-6f6b679f8f-hlww9
	51fce230c5a5a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c40879fe636ed       busybox-7dff88458-m9dxd
	bf650b256082f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   86a68133870d8       storage-provisioner
	851fbcba07c08       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   f6cd1d65812a6       kindnet-6n4gk
	171a9c405c59e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   1af3d34c2cf06       kube-proxy-f5nrl
	212bd68acb7c3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   032c3a483ab2c       kube-scheduler-multinode-336982
	65630aa0a16fe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   a53322d16f113       etcd-multinode-336982
	5b58598ac934c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   e961af5b081da       kube-controller-manager-multinode-336982
	99746d40c6523       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   98262cbe87007       kube-apiserver-multinode-336982
	
	
	==> coredns [7a556ba4d113ba15bbf1bbe5329aab9f84a5d66c9d80385f3a5d3ded62054521] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51821 - 50262 "HINFO IN 6604188459201968296.7220454333062668310. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024604934s
	
	
	==> coredns [e05069c8cc20dd27a545d01b2509dd1a1bcc53d588427b0b631ec8a5bd0cf92e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56010 - 34265 "HINFO IN 7361033659007836417.3270599287150432285. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015098427s
	
	
	==> describe nodes <==
	Name:               multinode-336982
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-336982
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-336982
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_05_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-336982
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:12:24 +0000   Fri, 16 Aug 2024 13:05:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    multinode-336982
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6f93cbd0c5d47e4b50511cb3c82abea
	  System UUID:                c6f93cbd-0c5d-47e4-b505-11cb3c82abea
	  Boot ID:                    b66d90f4-a0f3-498a-a206-a3b8d9ad2e69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m9dxd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-6f6b679f8f-hlww9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-336982                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-6n4gk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-336982             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-336982    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-f5nrl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-336982             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m4s                   kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node multinode-336982 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node multinode-336982 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node multinode-336982 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node multinode-336982 event: Registered Node multinode-336982 in Controller
	  Normal   NodeReady                10m                    kubelet          Node multinode-336982 status is now: NodeReady
	  Warning  ContainerGCFailed        4m51s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             4m12s (x7 over 5m13s)  kubelet          Node multinode-336982 status is now: NodeNotReady
	  Normal   RegisteredNode           4m1s                   node-controller  Node multinode-336982 event: Registered Node multinode-336982 in Controller
	  Normal   Starting                 4m1s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m1s                   kubelet          Node multinode-336982 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m1s                   kubelet          Node multinode-336982 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m1s                   kubelet          Node multinode-336982 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-336982-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-336982-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-336982
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T13_13_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:13:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-336982-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:14:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:14:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:14:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:14:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 13:13:31 +0000   Fri, 16 Aug 2024 13:14:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    multinode-336982-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 899bd1eb956a4d558f6a9f86cd27b24a
	  System UUID:                899bd1eb-956a-4d55-8f6a-9f86cd27b24a
	  Boot ID:                    195247a4-a11a-43f0-9450-d95e14f6c438
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rllpf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-hp65f              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-p44kb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-336982-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-336982-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-336982-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-336982-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m21s                  node-controller  Node multinode-336982-m02 event: Registered Node multinode-336982-m02 in Controller
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-336982-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-336982-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.051498] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.197692] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.117742] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.272944] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.970134] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.469535] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060945] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.991922] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.086982] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.120007] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.096299] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +5.085128] kauditd_printk_skb: 59 callbacks suppressed
	[Aug16 13:06] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 13:12] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.148332] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.187933] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.131211] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.278116] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +0.795771] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +5.418072] kauditd_printk_skb: 132 callbacks suppressed
	[  +6.698504] systemd-fstab-generator[3699]: Ignoring "noauto" option for root device
	[  +0.097607] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.244538] kauditd_printk_skb: 21 callbacks suppressed
	[  +3.724894] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	[ +14.942388] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [65630aa0a16feb60c141b71af051e690c23aef4d2dd50e00f887afe607359e70] <==
	{"level":"info","ts":"2024-08-16T13:05:30.809860Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:05:30.811883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:05:30.812473Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:05:30.812510Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-16T13:06:25.358304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.563385ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210303730053671054 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-336982-m02.17ec37528eff62ef\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-336982-m02.17ec37528eff62ef\" value_size:642 lease:6986931693198894650 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T13:06:25.358616Z","caller":"traceutil/trace.go:171","msg":"trace[246380907] linearizableReadLoop","detail":"{readStateIndex:466; appliedIndex:465; }","duration":"187.795701ms","start":"2024-08-16T13:06:25.170804Z","end":"2024-08-16T13:06:25.358600Z","steps":["trace[246380907] 'read index received'  (duration: 34.389447ms)","trace[246380907] 'applied index is now lower than readState.Index'  (duration: 153.405221ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T13:06:25.358683Z","caller":"traceutil/trace.go:171","msg":"trace[1612275950] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"231.163624ms","start":"2024-08-16T13:06:25.127500Z","end":"2024-08-16T13:06:25.358663Z","steps":["trace[1612275950] 'process raft request'  (duration: 77.734696ms)","trace[1612275950] 'compare'  (duration: 152.438097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:06:25.358727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.912968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:06:25.358746Z","caller":"traceutil/trace.go:171","msg":"trace[1988710534] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:446; }","duration":"187.940458ms","start":"2024-08-16T13:06:25.170800Z","end":"2024-08-16T13:06:25.358741Z","steps":["trace[1988710534] 'agreement among raft nodes before linearized reading'  (duration: 187.875046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:06:30.623762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.665083ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:06:30.624562Z","caller":"traceutil/trace.go:171","msg":"trace[365037968] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:477; }","duration":"102.476047ms","start":"2024-08-16T13:06:30.522073Z","end":"2024-08-16T13:06:30.624549Z","steps":["trace[365037968] 'range keys from in-memory index tree'  (duration: 101.655657ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:07:23.575809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.024393ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16210303730053671573 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-336982-m03.17ec37601ea32b97\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-336982-m03.17ec37601ea32b97\" value_size:646 lease:6986931693198895374 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T13:07:23.576306Z","caller":"traceutil/trace.go:171","msg":"trace[846683994] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"209.56128ms","start":"2024-08-16T13:07:23.366670Z","end":"2024-08-16T13:07:23.576232Z","steps":["trace[846683994] 'process raft request'  (duration: 77.921444ms)","trace[846683994] 'compare'  (duration: 130.740012ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:07:27.755098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.88044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-336982-m03\" ","response":"range_response_count:1 size:2887"}
	{"level":"info","ts":"2024-08-16T13:07:27.755161Z","caller":"traceutil/trace.go:171","msg":"trace[729201194] range","detail":"{range_begin:/registry/minions/multinode-336982-m03; range_end:; response_count:1; response_revision:616; }","duration":"140.961034ms","start":"2024-08-16T13:07:27.614189Z","end":"2024-08-16T13:07:27.755150Z","steps":["trace[729201194] 'range keys from in-memory index tree'  (duration: 140.777146ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:10:38.919122Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T13:10:38.919267Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-336982","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	{"level":"warn","ts":"2024-08-16T13:10:38.919526Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:10:38.919637Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:10:39.008887Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:10:39.008925Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T13:10:39.009010Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7fe6bf77aaafe0f6","current-leader-member-id":"7fe6bf77aaafe0f6"}
	{"level":"info","ts":"2024-08-16T13:10:39.011494Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-08-16T13:10:39.011654Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-08-16T13:10:39.011701Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-336982","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> etcd [d9919750845f3d2c3e35c1bdf9ff7dbcc3d1dc5557af45c7985144f0b6a09741] <==
	{"level":"info","ts":"2024-08-16T13:12:18.068244Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:12:19.904698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T13:12:19.904760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:12:19.904798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgPreVoteResp from 7fe6bf77aaafe0f6 at term 2"}
	{"level":"info","ts":"2024-08-16T13:12:19.904817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.904824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.904841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.904848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2024-08-16T13:12:19.907482Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7fe6bf77aaafe0f6","local-member-attributes":"{Name:multinode-336982 ClientURLs:[https://192.168.39.208:2379]}","request-path":"/0/members/7fe6bf77aaafe0f6/attributes","cluster-id":"fb8a78b66dce1ac7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:12:19.907543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:12:19.907728Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:12:19.907805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:12:19.907901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:12:19.909175Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:12:19.909686Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:12:19.910108Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-08-16T13:12:19.911335Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:13:04.794859Z","caller":"traceutil/trace.go:171","msg":"trace[1905363610] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"162.716563ms","start":"2024-08-16T13:13:04.632106Z","end":"2024-08-16T13:13:04.794823Z","steps":["trace[1905363610] 'process raft request'  (duration: 162.566632ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:13:04.795964Z","caller":"traceutil/trace.go:171","msg":"trace[2106163456] linearizableReadLoop","detail":"{readStateIndex:1238; appliedIndex:1237; }","duration":"133.397624ms","start":"2024-08-16T13:13:04.662548Z","end":"2024-08-16T13:13:04.795945Z","steps":["trace[2106163456] 'read index received'  (duration: 132.829972ms)","trace[2106163456] 'applied index is now lower than readState.Index'  (duration: 566.944µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:13:04.796152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.550452ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:13:04.796270Z","caller":"traceutil/trace.go:171","msg":"trace[1279985025] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1130; }","duration":"133.713513ms","start":"2024-08-16T13:13:04.662541Z","end":"2024-08-16T13:13:04.796255Z","steps":["trace[1279985025] 'agreement among raft nodes before linearized reading'  (duration: 133.495098ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:13:04.796840Z","caller":"traceutil/trace.go:171","msg":"trace[251242566] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"153.821259ms","start":"2024-08-16T13:13:04.643006Z","end":"2024-08-16T13:13:04.796828Z","steps":["trace[251242566] 'process raft request'  (duration: 152.83643ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:13:47.809199Z","caller":"traceutil/trace.go:171","msg":"trace[145983938] linearizableReadLoop","detail":"{readStateIndex:1352; appliedIndex:1351; }","duration":"146.990408ms","start":"2024-08-16T13:13:47.662135Z","end":"2024-08-16T13:13:47.809125Z","steps":["trace[145983938] 'read index received'  (duration: 117.340294ms)","trace[145983938] 'applied index is now lower than readState.Index'  (duration: 29.649068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T13:13:47.809370Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.177894ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:13:47.809598Z","caller":"traceutil/trace.go:171","msg":"trace[1585124629] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1232; }","duration":"147.456121ms","start":"2024-08-16T13:13:47.662129Z","end":"2024-08-16T13:13:47.809585Z","steps":["trace[1585124629] 'agreement among raft nodes before linearized reading'  (duration: 147.142498ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:16:25 up 11 min,  0 users,  load average: 0.09, 0.27, 0.18
	Linux multinode-336982 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17bb1938edcda574d2bd217180511d384e6a0d85479c79f4c1d31db5029c1d8c] <==
	I0816 13:15:18.730360       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:15:28.736290       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:15:28.736399       1 main.go:299] handling current node
	I0816 13:15:28.736502       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:15:28.736524       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:15:38.730242       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:15:38.730395       1 main.go:299] handling current node
	I0816 13:15:38.730558       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:15:38.730589       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:15:48.738370       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:15:48.738535       1 main.go:299] handling current node
	I0816 13:15:48.738578       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:15:48.738601       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:15:58.730718       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:15:58.730820       1 main.go:299] handling current node
	I0816 13:15:58.730852       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:15:58.730871       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:16:08.736335       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:16:08.736467       1 main.go:299] handling current node
	I0816 13:16:08.736482       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:16:08.736487       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:16:18.730199       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:16:18.730295       1 main.go:299] handling current node
	I0816 13:16:18.730322       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:16:18.730340       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [851fbcba07c087e4024905c5d5a1a90eb6c2fdf8219077078d724fa20411b383] <==
	I0816 13:09:55.222600       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:05.221852       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:05.221976       1 main.go:299] handling current node
	I0816 13:10:05.222009       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:05.222027       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:05.222173       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:05.222195       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:15.227865       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:15.227966       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:15.228162       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:15.228190       1 main.go:299] handling current node
	I0816 13:10:15.228215       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:15.228232       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:25.228650       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:25.228749       1 main.go:299] handling current node
	I0816 13:10:25.228780       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:25.228814       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:25.228962       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:25.228985       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:35.222179       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0816 13:10:35.222214       1 main.go:322] Node multinode-336982-m02 has CIDR [10.244.1.0/24] 
	I0816 13:10:35.222349       1 main.go:295] Handling node with IPs: map[192.168.39.145:{}]
	I0816 13:10:35.222355       1 main.go:322] Node multinode-336982-m03 has CIDR [10.244.3.0/24] 
	I0816 13:10:35.222601       1 main.go:295] Handling node with IPs: map[192.168.39.208:{}]
	I0816 13:10:35.222611       1 main.go:299] handling current node
	
	
	==> kube-apiserver [99746d40c6523384b1bf891b6e187b4e084acd439343373ecb64225c76096431] <==
	I0816 13:05:33.812633       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:05:33.817025       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:05:34.160735       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:05:34.979230       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:05:35.002966       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 13:05:35.013707       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:05:39.756995       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 13:05:39.987320       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0816 13:06:53.744780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57522: use of closed network connection
	E0816 13:06:53.914614       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57544: use of closed network connection
	E0816 13:06:54.097217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57560: use of closed network connection
	E0816 13:06:54.263230       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57576: use of closed network connection
	E0816 13:06:54.445165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57596: use of closed network connection
	E0816 13:06:54.603189       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57610: use of closed network connection
	E0816 13:06:54.889147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57622: use of closed network connection
	E0816 13:06:55.066329       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57632: use of closed network connection
	E0816 13:06:55.246686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57642: use of closed network connection
	E0816 13:06:55.403606       1 conn.go:339] Error on socket receive: read tcp 192.168.39.208:8443->192.168.39.1:57660: use of closed network connection
	I0816 13:10:38.917928       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0816 13:10:38.941961       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.946810       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.946980       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.947054       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.948084       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:10:38.948834       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9a0e2cdd1f40cc329c45713d49852c0612cdd99521c075c6211fde473ad0cfdc] <==
	I0816 13:12:21.241928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 13:12:21.243501       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 13:12:21.243553       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 13:12:21.260143       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 13:12:21.260228       1 policy_source.go:224] refreshing policies
	I0816 13:12:21.279657       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:12:21.297657       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 13:12:21.307902       1 aggregator.go:171] initial CRD sync complete...
	I0816 13:12:21.307973       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 13:12:21.307982       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 13:12:21.307991       1 cache.go:39] Caches are synced for autoregister controller
	E0816 13:12:21.336941       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 13:12:21.342164       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 13:12:21.342502       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 13:12:21.342894       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 13:12:21.343751       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 13:12:21.369233       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 13:12:22.146935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 13:12:24.612733       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:12:24.822013       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:12:24.889371       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:12:24.981205       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:12:24.997536       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:12:25.081355       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:12:25.089801       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [326c1e94f5e1fd94cc3c87623d36864f04a7b798119996892336497c0f01ab5f] <==
	I0816 13:13:39.655529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.655773       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-336982-m03\" does not exist"
	I0816 13:13:39.660389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.681125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:39.738367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:40.024502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:40.348096       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:49.770979       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:59.279652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:59.279735       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:13:59.295577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:13:59.670195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:14:03.986504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:14:04.001331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:14:04.449701       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:14:04.449792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:14:44.631691       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lg8jj"
	I0816 13:14:44.658037       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lg8jj"
	I0816 13:14:44.658208       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kp5tg"
	I0816 13:14:44.685475       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kp5tg"
	I0816 13:14:44.689508       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:14:44.703746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:14:44.724016       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.066594ms"
	I0816 13:14:44.724130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.719µs"
	I0816 13:14:49.805794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	
	
	==> kube-controller-manager [5b58598ac934ca7f1a7851e587a2d1dd3a072d35c90bfedf4580b46cf9c49b23] <==
	I0816 13:08:12.525866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:12.525981       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:08:13.550731       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-336982-m03\" does not exist"
	I0816 13:08:13.551175       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:08:13.577722       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-336982-m03" podCIDRs=["10.244.3.0/24"]
	I0816 13:08:13.577764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:13.577789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:13.577940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:13.979334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:14.034542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:14.370112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:23.577608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:33.443192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:33.444570       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m02"
	I0816 13:08:33.454826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:08:33.926842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:09:18.945841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:09:18.946989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-336982-m03"
	I0816 13:09:18.951284       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:09:18.970318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:09:18.978933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	I0816 13:09:19.026697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.187166ms"
	I0816 13:09:19.027045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.289µs"
	I0816 13:09:24.040707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m02"
	I0816 13:09:34.121041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-336982-m03"
	
	
	==> kube-proxy [171a9c405c59e0ecaa54d87efd81e8b7ea92feb12d2b0e9e89ad028a749fd793] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:05:40.859239       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:05:40.874196       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0816 13:05:40.874290       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:05:40.935503       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:05:40.935556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:05:40.935587       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:05:40.938984       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:05:40.939282       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:05:40.939297       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:05:40.940857       1 config.go:197] "Starting service config controller"
	I0816 13:05:40.940883       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:05:40.940926       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:05:40.940931       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:05:40.941372       1 config.go:326] "Starting node config controller"
	I0816 13:05:40.941378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:05:41.045312       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:05:41.045356       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:05:41.045381       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e8ca000436a5314e0960863ed8b9db6fc418404e98414d0dd4edb8a36cafddde] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:12:18.856613       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:12:21.318808       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	E0816 13:12:21.318942       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:12:21.502089       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:12:21.504995       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:12:21.510469       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:12:21.519063       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:12:21.519356       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:12:21.520254       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:12:21.527103       1 config.go:197] "Starting service config controller"
	I0816 13:12:21.527152       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:12:21.527172       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:12:21.527176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:12:21.532096       1 config.go:326] "Starting node config controller"
	I0816 13:12:21.532179       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:12:21.627726       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:12:21.627835       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:12:21.632560       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [212bd68acb7c3f8a5334f4c436c1b81fcfb0240447356d8ca45553de7b2d1463] <==
	E0816 13:05:32.171292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.028631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 13:05:33.028684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.142102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 13:05:33.142153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.205126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 13:05:33.205270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.224780       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 13:05:33.224836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.229885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:05:33.230966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.259484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:05:33.259878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.335485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:05:33.335537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.439290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:05:33.439345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.447726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:05:33.447776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.473478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 13:05:33.473524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:05:33.619444       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 13:05:33.619479       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 13:05:36.463088       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 13:10:38.912505       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [330eb5847d3245030ee44a01d452db0c1f31ddc5727677ecbcd799ae614ebb97] <==
	I0816 13:12:18.600393       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:12:21.238154       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:12:21.238200       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:12:21.238210       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:12:21.238221       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:12:21.276742       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:12:21.276787       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:12:21.285824       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:12:21.285885       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:12:21.286600       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:12:21.286697       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:12:21.386282       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 13:15:14 multinode-336982 kubelet[3706]: E0816 13:15:14.351670    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814114351202393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:24 multinode-336982 kubelet[3706]: E0816 13:15:24.313636    3706 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:15:24 multinode-336982 kubelet[3706]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:15:24 multinode-336982 kubelet[3706]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:15:24 multinode-336982 kubelet[3706]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:15:24 multinode-336982 kubelet[3706]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:15:24 multinode-336982 kubelet[3706]: E0816 13:15:24.354460    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814124353978224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:24 multinode-336982 kubelet[3706]: E0816 13:15:24.354491    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814124353978224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:34 multinode-336982 kubelet[3706]: E0816 13:15:34.355683    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814134355391238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:34 multinode-336982 kubelet[3706]: E0816 13:15:34.355707    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814134355391238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:44 multinode-336982 kubelet[3706]: E0816 13:15:44.358334    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814144357513132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:44 multinode-336982 kubelet[3706]: E0816 13:15:44.358380    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814144357513132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:54 multinode-336982 kubelet[3706]: E0816 13:15:54.361036    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814154360032158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:15:54 multinode-336982 kubelet[3706]: E0816 13:15:54.361310    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814154360032158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:16:04 multinode-336982 kubelet[3706]: E0816 13:16:04.363669    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814164363339113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:16:04 multinode-336982 kubelet[3706]: E0816 13:16:04.363693    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814164363339113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:16:14 multinode-336982 kubelet[3706]: E0816 13:16:14.366554    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814174365694147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:16:14 multinode-336982 kubelet[3706]: E0816 13:16:14.367223    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814174365694147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:16:24 multinode-336982 kubelet[3706]: E0816 13:16:24.311598    3706 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:16:24 multinode-336982 kubelet[3706]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:16:24 multinode-336982 kubelet[3706]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:16:24 multinode-336982 kubelet[3706]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:16:24 multinode-336982 kubelet[3706]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:16:24 multinode-336982 kubelet[3706]: E0816 13:16:24.370131    3706 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814184369859269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:16:24 multinode-336982 kubelet[3706]: E0816 13:16:24.370153    3706 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723814184369859269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:16:25.084723   41858 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-3966/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-336982 -n multinode-336982
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-336982 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.32s)
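
Note on the "bufio.Scanner: token too long" message in the stderr block above: that is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt exceeds the scanner's default 64 KiB token limit. A minimal, illustrative sketch (not minikube's actual logs.go implementation) of reading a file with very long lines by enlarging the scanner buffer:

	package main

	import (
		"bufio"
		"errors"
		"fmt"
		"os"
	)

	// readLongLines reads a file line by line, enlarging the scanner buffer so
	// that one very long line does not trip bufio.ErrTooLong ("token too long").
	func readLongLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB); raise it to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(len(sc.Text())) // placeholder: process each line
		}
		if errors.Is(sc.Err(), bufio.ErrTooLong) {
			return fmt.Errorf("a line still exceeds the buffer: %w", sc.Err())
		}
		return sc.Err()
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: readlonglines <file>")
			os.Exit(2)
		}
		if err := readLongLines(os.Args[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
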

                                                
                                    
x
+
TestPreload (352.82s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-436406 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0816 13:20:40.923420   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:39.892526   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-436406 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m30.188794984s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-436406 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-436406 image pull gcr.io/k8s-minikube/busybox: (2.7848852s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-436406
E0816 13:23:56.826100   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:40.923470   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-436406: exit status 82 (2m0.453092156s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-436406"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-436406 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-16 13:25:53.040566991 +0000 UTC m=+3895.949257615
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-436406 -n test-preload-436406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-436406 -n test-preload-436406: exit status 3 (18.472216583s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:26:11.509233   45431 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host
	E0816 13:26:11.509252   45431 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-436406" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-436406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-436406
--- FAIL: TestPreload (352.82s)

                                                
                                    
x
+
TestKubernetesUpgrade (442.36s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m34.227453133s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-759623] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-759623" primary control-plane node in "kubernetes-upgrade-759623" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:28:06.333571   46501 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:28:06.333964   46501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:28:06.333978   46501 out.go:358] Setting ErrFile to fd 2...
	I0816 13:28:06.333986   46501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:28:06.334280   46501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:28:06.335157   46501 out.go:352] Setting JSON to false
	I0816 13:28:06.336532   46501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4231,"bootTime":1723810655,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:28:06.336618   46501 start.go:139] virtualization: kvm guest
	I0816 13:28:06.338853   46501 out.go:177] * [kubernetes-upgrade-759623] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:28:06.340541   46501 notify.go:220] Checking for updates...
	I0816 13:28:06.341961   46501 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:28:06.343788   46501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:28:06.345468   46501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:28:06.346929   46501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:28:06.348073   46501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:28:06.349305   46501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:28:06.350797   46501 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:28:06.385491   46501 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 13:28:06.386747   46501 start.go:297] selected driver: kvm2
	I0816 13:28:06.386755   46501 start.go:901] validating driver "kvm2" against <nil>
	I0816 13:28:06.386768   46501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:28:06.387658   46501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:28:09.329140   46501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:28:09.344639   46501 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:28:09.344695   46501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 13:28:09.344974   46501 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 13:28:09.345042   46501 cni.go:84] Creating CNI manager for ""
	I0816 13:28:09.345059   46501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:28:09.345073   46501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 13:28:09.345140   46501 start.go:340] cluster config:
	{Name:kubernetes-upgrade-759623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:28:09.345269   46501 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:28:09.347292   46501 out.go:177] * Starting "kubernetes-upgrade-759623" primary control-plane node in "kubernetes-upgrade-759623" cluster
	I0816 13:28:09.348670   46501 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:28:09.348714   46501 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:28:09.348735   46501 cache.go:56] Caching tarball of preloaded images
	I0816 13:28:09.348826   46501 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:28:09.348840   46501 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 13:28:09.349207   46501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/config.json ...
	I0816 13:28:09.349233   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/config.json: {Name:mk2f05c2b5aef35a2ce13318451068359bb334d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:09.349365   46501 start.go:360] acquireMachinesLock for kubernetes-upgrade-759623: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:28:09.349396   46501 start.go:364] duration metric: took 16.422µs to acquireMachinesLock for "kubernetes-upgrade-759623"
	I0816 13:28:09.349412   46501 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-759623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:28:09.349484   46501 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 13:28:09.351126   46501 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 13:28:09.351268   46501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:28:09.351320   46501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:28:09.366171   46501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0816 13:28:09.366614   46501 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:28:09.367200   46501 main.go:141] libmachine: Using API Version  1
	I0816 13:28:09.367245   46501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:28:09.367672   46501 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:28:09.367918   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:28:09.368083   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:09.368265   46501 start.go:159] libmachine.API.Create for "kubernetes-upgrade-759623" (driver="kvm2")
	I0816 13:28:09.368294   46501 client.go:168] LocalClient.Create starting
	I0816 13:28:09.368329   46501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 13:28:09.368362   46501 main.go:141] libmachine: Decoding PEM data...
	I0816 13:28:09.368378   46501 main.go:141] libmachine: Parsing certificate...
	I0816 13:28:09.368445   46501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 13:28:09.368466   46501 main.go:141] libmachine: Decoding PEM data...
	I0816 13:28:09.368485   46501 main.go:141] libmachine: Parsing certificate...
	I0816 13:28:09.368501   46501 main.go:141] libmachine: Running pre-create checks...
	I0816 13:28:09.368514   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .PreCreateCheck
	I0816 13:28:09.368841   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetConfigRaw
	I0816 13:28:09.369226   46501 main.go:141] libmachine: Creating machine...
	I0816 13:28:09.369241   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .Create
	I0816 13:28:09.369407   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Creating KVM machine...
	I0816 13:28:09.370681   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found existing default KVM network
	I0816 13:28:09.371471   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:09.371325   46585 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
	I0816 13:28:09.371497   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | created network xml: 
	I0816 13:28:09.371507   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | <network>
	I0816 13:28:09.371513   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |   <name>mk-kubernetes-upgrade-759623</name>
	I0816 13:28:09.371519   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |   <dns enable='no'/>
	I0816 13:28:09.371524   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |   
	I0816 13:28:09.371535   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 13:28:09.371546   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |     <dhcp>
	I0816 13:28:09.371555   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 13:28:09.371562   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |     </dhcp>
	I0816 13:28:09.371627   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |   </ip>
	I0816 13:28:09.371652   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG |   
	I0816 13:28:09.371665   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | </network>
	I0816 13:28:09.371676   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | 
	I0816 13:28:09.376752   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | trying to create private KVM network mk-kubernetes-upgrade-759623 192.168.39.0/24...
	I0816 13:28:09.450953   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | private KVM network mk-kubernetes-upgrade-759623 192.168.39.0/24 created
	I0816 13:28:09.450982   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623 ...
	I0816 13:28:09.450998   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:09.450913   46585 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:28:09.451009   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 13:28:09.451090   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 13:28:09.685791   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:09.685654   46585 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa...
	I0816 13:28:09.872257   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:09.872090   46585 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/kubernetes-upgrade-759623.rawdisk...
	I0816 13:28:09.872299   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Writing magic tar header
	I0816 13:28:09.872325   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Writing SSH key tar header
	I0816 13:28:09.872342   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:09.872227   46585 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623 ...
	I0816 13:28:09.872359   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623
	I0816 13:28:09.872417   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623 (perms=drwx------)
	I0816 13:28:09.872448   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 13:28:09.872465   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 13:28:09.872481   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 13:28:09.872496   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 13:28:09.872506   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 13:28:09.872513   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 13:28:09.872524   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Creating domain...
	I0816 13:28:09.872543   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:28:09.872561   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 13:28:09.872570   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 13:28:09.872579   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home/jenkins
	I0816 13:28:09.872586   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Checking permissions on dir: /home
	I0816 13:28:09.872593   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Skipping /home - not owner
	I0816 13:28:09.873826   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) define libvirt domain using xml: 
	I0816 13:28:09.873852   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) <domain type='kvm'>
	I0816 13:28:09.873864   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <name>kubernetes-upgrade-759623</name>
	I0816 13:28:09.873870   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <memory unit='MiB'>2200</memory>
	I0816 13:28:09.873876   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <vcpu>2</vcpu>
	I0816 13:28:09.873892   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <features>
	I0816 13:28:09.873900   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <acpi/>
	I0816 13:28:09.873905   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <apic/>
	I0816 13:28:09.873910   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <pae/>
	I0816 13:28:09.873922   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     
	I0816 13:28:09.873930   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   </features>
	I0816 13:28:09.873943   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <cpu mode='host-passthrough'>
	I0816 13:28:09.873952   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   
	I0816 13:28:09.873956   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   </cpu>
	I0816 13:28:09.873963   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <os>
	I0816 13:28:09.873968   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <type>hvm</type>
	I0816 13:28:09.873976   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <boot dev='cdrom'/>
	I0816 13:28:09.873982   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <boot dev='hd'/>
	I0816 13:28:09.873988   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <bootmenu enable='no'/>
	I0816 13:28:09.873995   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   </os>
	I0816 13:28:09.874001   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   <devices>
	I0816 13:28:09.874012   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <disk type='file' device='cdrom'>
	I0816 13:28:09.874042   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/boot2docker.iso'/>
	I0816 13:28:09.874060   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <target dev='hdc' bus='scsi'/>
	I0816 13:28:09.874072   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <readonly/>
	I0816 13:28:09.874083   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </disk>
	I0816 13:28:09.874096   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <disk type='file' device='disk'>
	I0816 13:28:09.874106   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 13:28:09.874117   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/kubernetes-upgrade-759623.rawdisk'/>
	I0816 13:28:09.874127   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <target dev='hda' bus='virtio'/>
	I0816 13:28:09.874152   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </disk>
	I0816 13:28:09.874170   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <interface type='network'>
	I0816 13:28:09.874178   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <source network='mk-kubernetes-upgrade-759623'/>
	I0816 13:28:09.874184   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <model type='virtio'/>
	I0816 13:28:09.874190   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </interface>
	I0816 13:28:09.874194   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <interface type='network'>
	I0816 13:28:09.874201   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <source network='default'/>
	I0816 13:28:09.874206   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <model type='virtio'/>
	I0816 13:28:09.874214   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </interface>
	I0816 13:28:09.874221   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <serial type='pty'>
	I0816 13:28:09.874227   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <target port='0'/>
	I0816 13:28:09.874234   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </serial>
	I0816 13:28:09.874240   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <console type='pty'>
	I0816 13:28:09.874248   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <target type='serial' port='0'/>
	I0816 13:28:09.874264   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </console>
	I0816 13:28:09.874272   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     <rng model='virtio'>
	I0816 13:28:09.874278   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)       <backend model='random'>/dev/random</backend>
	I0816 13:28:09.874284   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     </rng>
	I0816 13:28:09.874290   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     
	I0816 13:28:09.874296   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)     
	I0816 13:28:09.874302   46501 main.go:141] libmachine: (kubernetes-upgrade-759623)   </devices>
	I0816 13:28:09.874309   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) </domain>
	I0816 13:28:09.874316   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) 
	I0816 13:28:09.878520   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:8e:9d:64 in network default
	I0816 13:28:09.879080   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Ensuring networks are active...
	I0816 13:28:09.879107   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:09.879714   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Ensuring network default is active
	I0816 13:28:09.879988   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Ensuring network mk-kubernetes-upgrade-759623 is active
	I0816 13:28:09.880523   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Getting domain xml...
	I0816 13:28:09.881231   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Creating domain...
	I0816 13:28:11.086451   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Waiting to get IP...
	I0816 13:28:11.087300   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:11.087698   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:11.087718   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:11.087659   46585 retry.go:31] will retry after 301.458873ms: waiting for machine to come up
	I0816 13:28:11.391181   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:11.391518   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:11.391539   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:11.391466   46585 retry.go:31] will retry after 259.57275ms: waiting for machine to come up
	I0816 13:28:11.652965   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:11.653399   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:11.653428   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:11.653338   46585 retry.go:31] will retry after 400.260362ms: waiting for machine to come up
	I0816 13:28:12.054795   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:12.055182   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:12.055209   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:12.055119   46585 retry.go:31] will retry after 450.289525ms: waiting for machine to come up
	I0816 13:28:12.506729   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:12.507172   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:12.507200   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:12.507122   46585 retry.go:31] will retry after 545.022776ms: waiting for machine to come up
	I0816 13:28:13.053818   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:13.054254   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:13.054282   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:13.054189   46585 retry.go:31] will retry after 868.275201ms: waiting for machine to come up
	I0816 13:28:13.924310   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:13.924728   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:13.924765   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:13.924691   46585 retry.go:31] will retry after 1.080481362s: waiting for machine to come up
	I0816 13:28:15.007246   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:15.007668   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:15.007696   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:15.007616   46585 retry.go:31] will retry after 1.244796355s: waiting for machine to come up
	I0816 13:28:16.254166   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:16.254812   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:16.254861   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:16.254778   46585 retry.go:31] will retry after 1.63912249s: waiting for machine to come up
	I0816 13:28:17.895144   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:17.895538   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:17.895582   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:17.895524   46585 retry.go:31] will retry after 2.209940317s: waiting for machine to come up
	I0816 13:28:20.107263   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:20.107755   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:20.107777   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:20.107709   46585 retry.go:31] will retry after 2.38631477s: waiting for machine to come up
	I0816 13:28:22.497145   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:22.497638   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:22.497673   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:22.497609   46585 retry.go:31] will retry after 2.246212674s: waiting for machine to come up
	I0816 13:28:24.745734   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:24.746210   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:24.746232   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:24.746163   46585 retry.go:31] will retry after 3.64316753s: waiting for machine to come up
	I0816 13:28:28.393838   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:28.394303   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find current IP address of domain kubernetes-upgrade-759623 in network mk-kubernetes-upgrade-759623
	I0816 13:28:28.394332   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | I0816 13:28:28.394257   46585 retry.go:31] will retry after 4.4749065s: waiting for machine to come up
	I0816 13:28:32.871066   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:32.871655   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Found IP for machine: 192.168.39.57
	I0816 13:28:32.871685   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has current primary IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:32.871711   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Reserving static IP address...
	I0816 13:28:32.872074   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-759623", mac: "52:54:00:2b:14:2a", ip: "192.168.39.57"} in network mk-kubernetes-upgrade-759623
	I0816 13:28:32.950407   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Getting to WaitForSSH function...
	I0816 13:28:32.950442   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Reserved static IP address: 192.168.39.57
	I0816 13:28:32.950469   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Waiting for SSH to be available...
	I0816 13:28:32.952817   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:32.953243   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:32.953276   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:32.953362   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Using SSH client type: external
	I0816 13:28:32.953387   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa (-rw-------)
	I0816 13:28:32.953431   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:28:32.953445   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | About to run SSH command:
	I0816 13:28:32.953479   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | exit 0
	I0816 13:28:33.077115   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | SSH cmd err, output: <nil>: 
	I0816 13:28:33.077401   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) KVM machine creation complete!
	I0816 13:28:33.077739   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetConfigRaw
	I0816 13:28:33.078287   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:33.078446   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:33.078612   46501 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 13:28:33.078628   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetState
	I0816 13:28:33.079965   46501 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 13:28:33.079977   46501 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 13:28:33.079982   46501 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 13:28:33.079988   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.082157   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.082509   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.082537   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.082656   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:33.082823   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.083001   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.083134   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:33.083301   46501 main.go:141] libmachine: Using SSH client type: native
	I0816 13:28:33.083533   46501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:28:33.083547   46501 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 13:28:33.184314   46501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:28:33.184335   46501 main.go:141] libmachine: Detecting the provisioner...
	I0816 13:28:33.184343   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.186881   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.187223   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.187250   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.187513   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:33.187716   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.187881   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.188011   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:33.188170   46501 main.go:141] libmachine: Using SSH client type: native
	I0816 13:28:33.188346   46501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:28:33.188357   46501 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 13:28:33.293581   46501 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 13:28:33.293653   46501 main.go:141] libmachine: found compatible host: buildroot
	I0816 13:28:33.293663   46501 main.go:141] libmachine: Provisioning with buildroot...
	I0816 13:28:33.293672   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:28:33.293925   46501 buildroot.go:166] provisioning hostname "kubernetes-upgrade-759623"
	I0816 13:28:33.293956   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:28:33.294124   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.296992   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.297393   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.297423   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.297590   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:33.297766   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.297933   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.298083   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:33.298278   46501 main.go:141] libmachine: Using SSH client type: native
	I0816 13:28:33.298522   46501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:28:33.298542   46501 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-759623 && echo "kubernetes-upgrade-759623" | sudo tee /etc/hostname
	I0816 13:28:33.417586   46501 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-759623
	
	I0816 13:28:33.417613   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.420655   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.420982   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.421023   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.421245   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:33.421420   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.421598   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.421757   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:33.421899   46501 main.go:141] libmachine: Using SSH client type: native
	I0816 13:28:33.422060   46501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:28:33.422076   46501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-759623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-759623/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-759623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:28:33.533852   46501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:28:33.533878   46501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:28:33.533893   46501 buildroot.go:174] setting up certificates
	I0816 13:28:33.533904   46501 provision.go:84] configureAuth start
	I0816 13:28:33.533912   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:28:33.534194   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:28:33.536799   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.537140   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.537166   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.537341   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.539221   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.539529   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.539550   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.539674   46501 provision.go:143] copyHostCerts
	I0816 13:28:33.539732   46501 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:28:33.539754   46501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:28:33.539841   46501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:28:33.539960   46501 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:28:33.539970   46501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:28:33.540011   46501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:28:33.540110   46501 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:28:33.540120   46501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:28:33.540159   46501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:28:33.540243   46501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-759623 san=[127.0.0.1 192.168.39.57 kubernetes-upgrade-759623 localhost minikube]
	I0816 13:28:33.668556   46501 provision.go:177] copyRemoteCerts
	I0816 13:28:33.668613   46501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:28:33.668638   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.671582   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.671884   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.671906   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.672111   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:33.672413   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.672599   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:33.672734   46501 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:28:33.757511   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:28:33.784377   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:28:33.810619   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0816 13:28:33.837222   46501 provision.go:87] duration metric: took 303.305608ms to configureAuth
	I0816 13:28:33.837253   46501 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:28:33.837410   46501 config.go:182] Loaded profile config "kubernetes-upgrade-759623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:28:33.837476   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:33.840188   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.840501   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:33.840532   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:33.840676   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:33.840884   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.841086   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:33.841255   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:33.841444   46501 main.go:141] libmachine: Using SSH client type: native
	I0816 13:28:33.841604   46501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:28:33.841619   46501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:28:34.095842   46501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:28:34.095878   46501 main.go:141] libmachine: Checking connection to Docker...
	I0816 13:28:34.095890   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetURL
	I0816 13:28:34.097415   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | Using libvirt version 6000000
	I0816 13:28:34.099743   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.100196   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.100214   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.100527   46501 main.go:141] libmachine: Docker is up and running!
	I0816 13:28:34.100549   46501 main.go:141] libmachine: Reticulating splines...
	I0816 13:28:34.100558   46501 client.go:171] duration metric: took 24.73225544s to LocalClient.Create
	I0816 13:28:34.100585   46501 start.go:167] duration metric: took 24.732320832s to libmachine.API.Create "kubernetes-upgrade-759623"
	I0816 13:28:34.100597   46501 start.go:293] postStartSetup for "kubernetes-upgrade-759623" (driver="kvm2")
	I0816 13:28:34.100611   46501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:28:34.100634   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:34.100889   46501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:28:34.100929   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:34.103462   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.103802   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.103831   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.104003   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:34.104204   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:34.104367   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:34.104490   46501 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:28:34.183804   46501 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:28:34.188245   46501 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:28:34.188280   46501 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:28:34.188348   46501 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:28:34.188422   46501 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:28:34.188552   46501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:28:34.198532   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:28:34.223351   46501 start.go:296] duration metric: took 122.738495ms for postStartSetup
	I0816 13:28:34.223410   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetConfigRaw
	I0816 13:28:34.224060   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:28:34.226778   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.227123   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.227154   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.227354   46501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/config.json ...
	I0816 13:28:34.227580   46501 start.go:128] duration metric: took 24.878086326s to createHost
	I0816 13:28:34.227606   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:34.229656   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.229967   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.229995   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.230098   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:34.230283   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:34.230419   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:34.230548   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:34.230724   46501 main.go:141] libmachine: Using SSH client type: native
	I0816 13:28:34.230936   46501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:28:34.230954   46501 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:28:34.333858   46501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723814914.309892086
	
	I0816 13:28:34.333882   46501 fix.go:216] guest clock: 1723814914.309892086
	I0816 13:28:34.333892   46501 fix.go:229] Guest: 2024-08-16 13:28:34.309892086 +0000 UTC Remote: 2024-08-16 13:28:34.227593152 +0000 UTC m=+27.936446058 (delta=82.298934ms)
	I0816 13:28:34.333948   46501 fix.go:200] guest clock delta is within tolerance: 82.298934ms
	I0816 13:28:34.333954   46501 start.go:83] releasing machines lock for "kubernetes-upgrade-759623", held for 24.984548896s
	I0816 13:28:34.333986   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:34.334310   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:28:34.337491   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.338014   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.338044   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.338292   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:34.338881   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:34.339115   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:28:34.339210   46501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:28:34.339250   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:34.339324   46501 ssh_runner.go:195] Run: cat /version.json
	I0816 13:28:34.339346   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:28:34.341870   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.342235   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.342325   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.342350   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.342431   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:34.342615   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:34.342700   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:34.342723   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:34.342791   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:34.342899   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:28:34.342958   46501 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:28:34.343034   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:28:34.343176   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:28:34.343320   46501 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:28:34.426782   46501 ssh_runner.go:195] Run: systemctl --version
	I0816 13:28:34.455431   46501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:28:34.632410   46501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:28:34.638808   46501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:28:34.638879   46501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:28:34.659296   46501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:28:34.659324   46501 start.go:495] detecting cgroup driver to use...
	I0816 13:28:34.659405   46501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:28:34.681268   46501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:28:34.695852   46501 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:28:34.695910   46501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:28:34.710196   46501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:28:34.725009   46501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:28:34.858636   46501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:28:35.024272   46501 docker.go:233] disabling docker service ...
	I0816 13:28:35.024352   46501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:28:35.039211   46501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:28:35.053247   46501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:28:35.195204   46501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:28:35.325766   46501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:28:35.340854   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:28:35.366677   46501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:28:35.366739   46501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:28:35.380666   46501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:28:35.380735   46501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:28:35.393912   46501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:28:35.404685   46501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:28:35.415648   46501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:28:35.426841   46501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:28:35.436734   46501 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:28:35.436807   46501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:28:35.450760   46501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:28:35.460485   46501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:28:35.571450   46501 ssh_runner.go:195] Run: sudo systemctl restart crio
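	Based on the commands logged above, a sketch of the runtime configuration these steps leave on the guest before crio is restarted (assumed result, not captured verbatim in this log): the crictl endpoint file written by the printf | tee command, and the CRI-O keys rewritten by the sed edits.
	    # /etc/crictl.yaml (written by the tee command above)
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	    # /etc/crio/crio.conf.d/02-crio.conf (keys set by the sed commands above)
	    pause_image = "registry.k8s.io/pause:3.2"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"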
	I0816 13:28:35.706692   46501 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:28:35.706803   46501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:28:35.711767   46501 start.go:563] Will wait 60s for crictl version
	I0816 13:28:35.711825   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:35.715612   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:28:35.761409   46501 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:28:35.761517   46501 ssh_runner.go:195] Run: crio --version
	I0816 13:28:35.791619   46501 ssh_runner.go:195] Run: crio --version
	I0816 13:28:35.823785   46501 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:28:35.824860   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:28:35.827954   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:35.828349   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:28:24 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:28:35.828375   46501 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:28:35.828578   46501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:28:35.832888   46501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:28:35.847648   46501 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-759623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:28:35.847767   46501 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:28:35.847824   46501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:28:35.884763   46501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:28:35.884822   46501 ssh_runner.go:195] Run: which lz4
	I0816 13:28:35.889083   46501 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:28:35.893406   46501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:28:35.893441   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:28:37.550159   46501 crio.go:462] duration metric: took 1.661113771s to copy over tarball
	I0816 13:28:37.550245   46501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:28:40.067263   46501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.516983219s)
	I0816 13:28:40.067293   46501 crio.go:469] duration metric: took 2.517102967s to extract the tarball
	I0816 13:28:40.067303   46501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:28:40.109966   46501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:28:40.153456   46501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:28:40.153486   46501 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:28:40.153557   46501 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:28:40.153603   46501 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.153621   46501 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.153627   46501 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.153660   46501 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.153664   46501 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.153603   46501 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.153696   46501 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:28:40.155024   46501 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.155034   46501 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:28:40.155119   46501 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.155328   46501 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.155366   46501 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.155331   46501 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:28:40.155419   46501 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.155641   46501 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.345551   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.349694   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.368852   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.374718   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.382589   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.384135   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:28:40.391803   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.446684   46501 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:28:40.446723   46501 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.446778   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.486715   46501 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:28:40.486754   46501 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.486800   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.537981   46501 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:28:40.538013   46501 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:28:40.538032   46501 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.538052   46501 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.538080   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.538102   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.547768   46501 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:28:40.547804   46501 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:28:40.547831   46501 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:28:40.547851   46501 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:28:40.547856   46501 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.547877   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.547891   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.547892   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.547809   46501 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.547927   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.547942   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.547932   46501 ssh_runner.go:195] Run: which crictl
	I0816 13:28:40.567813   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.586571   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.666107   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.666170   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:28:40.666217   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.666277   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.666282   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.666411   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.707383   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.815458   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.821017   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:28:40.821038   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:28:40.821136   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:28:40.821198   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:28:40.821304   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:28:40.872178   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:28:40.957765   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:28:40.972118   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:28:40.972200   46501 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:28:40.978933   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:28:40.979141   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:28:40.979494   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:28:41.022162   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:28:41.027435   46501 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:28:41.033612   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:28:41.041413   46501 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:28:41.177799   46501 cache_images.go:92] duration metric: took 1.024293323s to LoadCachedImages
	W0816 13:28:41.177910   46501 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0816 13:28:41.177931   46501 kubeadm.go:934] updating node { 192.168.39.57 8443 v1.20.0 crio true true} ...
	I0816 13:28:41.178114   46501 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-759623 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:28:41.178215   46501 ssh_runner.go:195] Run: crio config
	I0816 13:28:41.222978   46501 cni.go:84] Creating CNI manager for ""
	I0816 13:28:41.223003   46501 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:28:41.223017   46501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:28:41.223039   46501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-759623 NodeName:kubernetes-upgrade-759623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:28:41.223374   46501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-759623"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:28:41.223479   46501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:28:41.233263   46501 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:28:41.233338   46501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:28:41.243038   46501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0816 13:28:41.260099   46501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:28:41.277173   46501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:28:41.294530   46501 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0816 13:28:41.298525   46501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
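	After the two /etc/hosts edits above (host.minikube.internal at 13:28:35 and control-plane.minikube.internal here), the guest's /etc/hosts would be expected to carry roughly these two added entries (a sketch; the resulting file is not dumped in this log):
	    192.168.39.1	host.minikube.internal
	    192.168.39.57	control-plane.minikube.internal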
	I0816 13:28:41.310707   46501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:28:41.436270   46501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:28:41.454079   46501 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623 for IP: 192.168.39.57
	I0816 13:28:41.454106   46501 certs.go:194] generating shared ca certs ...
	I0816 13:28:41.454135   46501 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:41.454319   46501 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:28:41.454394   46501 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:28:41.454407   46501 certs.go:256] generating profile certs ...
	I0816 13:28:41.454491   46501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.key
	I0816 13:28:41.454517   46501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.crt with IP's: []
	I0816 13:28:41.510687   46501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.crt ...
	I0816 13:28:41.510719   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.crt: {Name:mke75753f58dc7dcd35b99f75bc684bcce25af7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:41.510896   46501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.key ...
	I0816 13:28:41.510913   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.key: {Name:mk0b652086e5f38be0ce9a847b1413a106e0a6ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:41.511021   46501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key.d43cdebc
	I0816 13:28:41.511045   46501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt.d43cdebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57]
	I0816 13:28:41.786699   46501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt.d43cdebc ...
	I0816 13:28:41.786735   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt.d43cdebc: {Name:mkc04734b20a5fa54cd7fe50949029ed927c305f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:41.786919   46501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key.d43cdebc ...
	I0816 13:28:41.786940   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key.d43cdebc: {Name:mk633e324b49b75c993d6773fffcc1266ff4eff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:41.787039   46501 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt.d43cdebc -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt
	I0816 13:28:41.787122   46501 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key.d43cdebc -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key
	I0816 13:28:41.787173   46501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.key
	I0816 13:28:41.787189   46501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.crt with IP's: []
	I0816 13:28:42.184768   46501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.crt ...
	I0816 13:28:42.184796   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.crt: {Name:mk2538c36dc18d2e37b2065a23edbbed14b160b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:42.184984   46501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.key ...
	I0816 13:28:42.185001   46501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.key: {Name:mk0ddd6b332c2925f3751e8b6be45c06b982818f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:28:42.185206   46501 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:28:42.185244   46501 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:28:42.185254   46501 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:28:42.185274   46501 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:28:42.185299   46501 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:28:42.185317   46501 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:28:42.185351   46501 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:28:42.185916   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:28:42.213300   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:28:42.237447   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:28:42.263062   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:28:42.286150   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 13:28:42.323321   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:28:42.360587   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:28:42.385848   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:28:42.411195   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:28:42.440038   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:28:42.465307   46501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:28:42.489518   46501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:28:42.506740   46501 ssh_runner.go:195] Run: openssl version
	I0816 13:28:42.512866   46501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:28:42.523895   46501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:28:42.528582   46501 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:28:42.528636   46501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:28:42.534795   46501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:28:42.545812   46501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:28:42.561282   46501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:28:42.566216   46501 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:28:42.566269   46501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:28:42.572626   46501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:28:42.583989   46501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:28:42.594913   46501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:28:42.599996   46501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:28:42.600044   46501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:28:42.606208   46501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
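	The test/ln/openssl sequences above install each CA certificate into the system trust store under its OpenSSL subject-hash name; for the minikubeCA.pem case the effective steps condense to the following sketch, restating the logged commands:
	    # 1) link the cert into /etc/ssl/certs under its own name
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    # 2) compute the subject hash (b5213941 for this CA, per the log)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # 3) create the hash-named entry that TLS libraries look up
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0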
	I0816 13:28:42.617349   46501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:28:42.621816   46501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 13:28:42.621874   46501 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-759623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:28:42.621945   46501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:28:42.621985   46501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:28:42.665141   46501 cri.go:89] found id: ""
	I0816 13:28:42.665223   46501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:28:42.675547   46501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:28:42.685676   46501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:28:42.696567   46501 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:28:42.696591   46501 kubeadm.go:157] found existing configuration files:
	
	I0816 13:28:42.696662   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:28:42.706247   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:28:42.706319   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:28:42.716235   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:28:42.725797   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:28:42.725867   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:28:42.735937   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:28:42.745807   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:28:42.745870   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:28:42.755668   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:28:42.765300   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:28:42.765367   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:28:42.774988   46501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:28:42.899927   46501 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:28:42.900054   46501 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:28:43.050033   46501 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:28:43.050212   46501 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:28:43.050367   46501 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:28:43.266115   46501 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:28:43.360939   46501 out.go:235]   - Generating certificates and keys ...
	I0816 13:28:43.361083   46501 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:28:43.361202   46501 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:28:43.388244   46501 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 13:28:43.724425   46501 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 13:28:43.857778   46501 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 13:28:44.073560   46501 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 13:28:44.554864   46501 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 13:28:44.555114   46501 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-759623 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0816 13:28:44.755778   46501 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 13:28:44.756102   46501 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-759623 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0816 13:28:44.966654   46501 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 13:28:45.107378   46501 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 13:28:45.234278   46501 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 13:28:45.234649   46501 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:28:45.354898   46501 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:28:45.638348   46501 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:28:45.801606   46501 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:28:45.965756   46501 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:28:45.985124   46501 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:28:45.986292   46501 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:28:45.986351   46501 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:28:46.138304   46501 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:28:46.140374   46501 out.go:235]   - Booting up control plane ...
	I0816 13:28:46.140508   46501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:28:46.148133   46501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:28:46.148250   46501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:28:46.148815   46501 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:28:46.153556   46501 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:29:26.147322   46501 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:29:26.148462   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:29:26.148669   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:29:31.148890   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:29:31.149157   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:29:41.148659   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:29:41.148843   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:30:01.148577   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:30:01.148834   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:30:41.150118   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:30:41.150377   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:30:41.150396   46501 kubeadm.go:310] 
	I0816 13:30:41.150458   46501 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:30:41.150538   46501 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:30:41.150561   46501 kubeadm.go:310] 
	I0816 13:30:41.150602   46501 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:30:41.150670   46501 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:30:41.150811   46501 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:30:41.150819   46501 kubeadm.go:310] 
	I0816 13:30:41.150969   46501 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:30:41.151025   46501 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:30:41.151079   46501 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:30:41.151091   46501 kubeadm.go:310] 
	I0816 13:30:41.151268   46501 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:30:41.151386   46501 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:30:41.151400   46501 kubeadm.go:310] 
	I0816 13:30:41.151572   46501 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:30:41.151707   46501 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:30:41.151830   46501 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:30:41.151947   46501 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:30:41.151965   46501 kubeadm.go:310] 
	I0816 13:30:41.152742   46501 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:30:41.152856   46501 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:30:41.152970   46501 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 13:30:41.153112   46501 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-759623 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-759623 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-759623 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-759623 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 13:30:41.153158   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:30:42.450216   46501 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.297018541s)
	I0816 13:30:42.450314   46501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:30:42.471631   46501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:30:42.482551   46501 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:30:42.482570   46501 kubeadm.go:157] found existing configuration files:
	
	I0816 13:30:42.482611   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:30:42.492625   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:30:42.492713   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:30:42.504149   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:30:42.515004   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:30:42.515068   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:30:42.525427   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:30:42.540684   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:30:42.540752   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:30:42.551610   46501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:30:42.561335   46501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:30:42.561400   46501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:30:42.572353   46501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:30:42.648009   46501 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:30:42.648148   46501 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:30:42.805501   46501 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:30:42.805641   46501 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:30:42.805763   46501 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:30:43.025780   46501 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:30:43.029069   46501 out.go:235]   - Generating certificates and keys ...
	I0816 13:30:43.029188   46501 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:30:43.029312   46501 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:30:43.029433   46501 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:30:43.029530   46501 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:30:43.029672   46501 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:30:43.029770   46501 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:30:43.029859   46501 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:30:43.029943   46501 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:30:43.030063   46501 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:30:43.030180   46501 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:30:43.030233   46501 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:30:43.030318   46501 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:30:43.245694   46501 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:30:43.956230   46501 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:30:44.141944   46501 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:30:44.524361   46501 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:30:44.545829   46501 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:30:44.546807   46501 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:30:44.546859   46501 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:30:44.718857   46501 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:30:44.720673   46501 out.go:235]   - Booting up control plane ...
	I0816 13:30:44.720811   46501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:30:44.725189   46501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:30:44.727406   46501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:30:44.729835   46501 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:30:44.734393   46501 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:31:24.741098   46501 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:31:24.741667   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:31:24.741942   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:31:29.742859   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:31:29.743130   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:31:39.744068   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:31:39.744483   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:31:59.743419   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:31:59.743640   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:32:39.743283   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:32:39.743557   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:32:39.743587   46501 kubeadm.go:310] 
	I0816 13:32:39.743644   46501 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:32:39.743746   46501 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:32:39.743764   46501 kubeadm.go:310] 
	I0816 13:32:39.743814   46501 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:32:39.743859   46501 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:32:39.743996   46501 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:32:39.744008   46501 kubeadm.go:310] 
	I0816 13:32:39.744180   46501 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:32:39.744235   46501 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:32:39.744293   46501 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:32:39.744310   46501 kubeadm.go:310] 
	I0816 13:32:39.744451   46501 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:32:39.744593   46501 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:32:39.744605   46501 kubeadm.go:310] 
	I0816 13:32:39.744741   46501 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:32:39.744854   46501 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:32:39.744971   46501 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:32:39.745065   46501 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:32:39.745076   46501 kubeadm.go:310] 
	I0816 13:32:39.747018   46501 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:32:39.747159   46501 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:32:39.747283   46501 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:32:39.747366   46501 kubeadm.go:394] duration metric: took 3m57.125496112s to StartCluster
	I0816 13:32:39.747428   46501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:32:39.747488   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:32:39.799002   46501 cri.go:89] found id: ""
	I0816 13:32:39.799039   46501 logs.go:276] 0 containers: []
	W0816 13:32:39.799050   46501 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:32:39.799065   46501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:32:39.799132   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:32:39.838934   46501 cri.go:89] found id: ""
	I0816 13:32:39.838960   46501 logs.go:276] 0 containers: []
	W0816 13:32:39.838971   46501 logs.go:278] No container was found matching "etcd"
	I0816 13:32:39.838978   46501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:32:39.839037   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:32:39.899422   46501 cri.go:89] found id: ""
	I0816 13:32:39.899448   46501 logs.go:276] 0 containers: []
	W0816 13:32:39.899456   46501 logs.go:278] No container was found matching "coredns"
	I0816 13:32:39.899461   46501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:32:39.899513   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:32:39.945886   46501 cri.go:89] found id: ""
	I0816 13:32:39.945916   46501 logs.go:276] 0 containers: []
	W0816 13:32:39.945927   46501 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:32:39.945935   46501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:32:39.945993   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:32:39.987366   46501 cri.go:89] found id: ""
	I0816 13:32:39.987399   46501 logs.go:276] 0 containers: []
	W0816 13:32:39.987411   46501 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:32:39.987418   46501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:32:39.987556   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:32:40.037994   46501 cri.go:89] found id: ""
	I0816 13:32:40.038019   46501 logs.go:276] 0 containers: []
	W0816 13:32:40.038029   46501 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:32:40.038036   46501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:32:40.038094   46501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:32:40.078647   46501 cri.go:89] found id: ""
	I0816 13:32:40.078679   46501 logs.go:276] 0 containers: []
	W0816 13:32:40.078690   46501 logs.go:278] No container was found matching "kindnet"
	I0816 13:32:40.078700   46501 logs.go:123] Gathering logs for kubelet ...
	I0816 13:32:40.078714   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:32:40.149105   46501 logs.go:123] Gathering logs for dmesg ...
	I0816 13:32:40.149152   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:32:40.167406   46501 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:32:40.167439   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:32:40.333388   46501 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:32:40.333408   46501 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:32:40.333423   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:32:40.450414   46501 logs.go:123] Gathering logs for container status ...
	I0816 13:32:40.450451   46501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 13:32:40.502224   46501 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:32:40.502294   46501 out.go:270] * 
	* 
	W0816 13:32:40.502361   46501 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:32:40.502379   46501 out.go:270] * 
	* 
	W0816 13:32:40.503524   46501 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:32:40.507037   46501 out.go:201] 
	W0816 13:32:40.508292   46501 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:32:40.508345   46501 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:32:40.508369   46501 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:32:40.510061   46501 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
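For reference: the v1.20.0 start above fails with K8S_KUBELET_NOT_RUNNING (exit status 109), and the suggestion embedded in the log is to inspect the kubelet journal and retry with an explicit cgroup driver. A hedged sketch of that manual follow-up, reusing only the profile name and flags already present in this log (treating kubelet.cgroup-driver=systemd as an assumption taken from the logged hint, not something this run verified):

	# read why the kubelet never answered http://localhost:10248/healthz (assumes the VM is still reachable)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-759623 -- 'sudo journalctl -xeu kubelet | tail -n 50'
	# retry the failed start with the cgroup driver named in the suggestion
	out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

The recorded test run itself does not take this path; it proceeds below by stopping the profile and starting it again at v1.31.0.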
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-759623
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-759623: (7.153443585s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-759623 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-759623 status --format={{.Host}}: exit status 7 (66.330254ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.02741989s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-759623 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.118105ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-759623] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-759623
	    minikube start -p kubernetes-upgrade-759623 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7596232 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-759623 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
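The exit status 106 above is the expected refusal: minikube will not downgrade an existing v1.31.0 cluster in place. A sketch of the two follow-up actions, both drawn from this run and from minikube's own suggestion text:

    # confirm the existing cluster still reports v1.31.0
    kubectl --context kubernetes-upgrade-759623 version --output=json
    # if v1.20.0 were actually required, recreate the profile instead
    minikube delete -p kubernetes-upgrade-759623
    minikube start -p kubernetes-upgrade-759623 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio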
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-759623 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.224206205s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-16 13:35:25.193358077 +0000 UTC m=+4468.102048713
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-759623 -n kubernetes-upgrade-759623
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-759623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-759623 logs -n 25: (1.621996235s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-251866 sudo                 | cilium-251866             | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-251866 sudo find            | cilium-251866             | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-251866 sudo crio            | cilium-251866             | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-251866                      | cilium-251866             | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC | 16 Aug 24 13:31 UTC |
	| start   | -p force-systemd-flag-981990          | force-systemd-flag-981990 | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC | 16 Aug 24 13:32 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-760817 stop           | minikube                  | jenkins | v1.26.0 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:32 UTC |
	| ssh     | -p NoKubernetes-169820 sudo           | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-169820                | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:32 UTC |
	| stop    | -p kubernetes-upgrade-759623          | kubernetes-upgrade-759623 | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:32 UTC |
	| start   | -p stopped-upgrade-760817             | stopped-upgrade-760817    | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:33 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-741583           | force-systemd-env-741583  | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:33 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-759623          | kubernetes-upgrade-759623 | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:34 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-981990 ssh cat     | force-systemd-flag-981990 | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:32 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-981990          | force-systemd-flag-981990 | jenkins | v1.33.1 | 16 Aug 24 13:32 UTC | 16 Aug 24 13:33 UTC |
	| start   | -p cert-expiration-050553             | cert-expiration-050553    | jenkins | v1.33.1 | 16 Aug 24 13:33 UTC | 16 Aug 24 13:34 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-760817             | stopped-upgrade-760817    | jenkins | v1.33.1 | 16 Aug 24 13:33 UTC | 16 Aug 24 13:33 UTC |
	| start   | -p cert-options-779306                | cert-options-779306       | jenkins | v1.33.1 | 16 Aug 24 13:33 UTC | 16 Aug 24 13:34 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-741583           | force-systemd-env-741583  | jenkins | v1.33.1 | 16 Aug 24 13:33 UTC | 16 Aug 24 13:33 UTC |
	| start   | -p old-k8s-version-882237             | old-k8s-version-882237    | jenkins | v1.33.1 | 16 Aug 24 13:33 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-759623          | kubernetes-upgrade-759623 | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-759623          | kubernetes-upgrade-759623 | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:35 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-779306 ssh               | cert-options-779306       | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-779306 -- sudo        | cert-options-779306       | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-779306                | cert-options-779306       | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                  | no-preload-311070         | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:34:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:34:54.356571   54744 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:34:54.356671   54744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:34:54.356679   54744 out.go:358] Setting ErrFile to fd 2...
	I0816 13:34:54.356683   54744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:34:54.356842   54744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:34:54.357500   54744 out.go:352] Setting JSON to false
	I0816 13:34:54.358477   54744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4639,"bootTime":1723810655,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:34:54.358532   54744 start.go:139] virtualization: kvm guest
	I0816 13:34:54.360735   54744 out.go:177] * [no-preload-311070] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:34:54.362127   54744 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:34:54.362172   54744 notify.go:220] Checking for updates...
	I0816 13:34:54.364651   54744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:34:54.365925   54744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:34:54.367096   54744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:34:54.368697   54744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:34:54.369862   54744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:34:54.371521   54744 config.go:182] Loaded profile config "cert-expiration-050553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:34:54.371664   54744 config.go:182] Loaded profile config "kubernetes-upgrade-759623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:34:54.371807   54744 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:34:54.371905   54744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:34:54.408681   54744 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 13:34:53.410300   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:53.410713   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:53.410735   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:53.410666   54366 retry.go:31] will retry after 3.706660755s: waiting for machine to come up
	I0816 13:34:54.409953   54744 start.go:297] selected driver: kvm2
	I0816 13:34:54.409980   54744 start.go:901] validating driver "kvm2" against <nil>
	I0816 13:34:54.409995   54744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:34:54.411141   54744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.411228   54744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:34:54.426418   54744 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:34:54.426470   54744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 13:34:54.426678   54744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:34:54.426749   54744 cni.go:84] Creating CNI manager for ""
	I0816 13:34:54.426765   54744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:34:54.426777   54744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
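	The bridge CNI selected above is minikube's recommendation whenever the kvm2 driver is paired with the crio runtime. As an illustration only (this flag is not passed in the run above), the same choice could be made explicit on the command line:

	    minikube start -p no-preload-311070 --driver=kvm2 --container-runtime=crio --cni=bridge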
	I0816 13:34:54.426839   54744 start.go:340] cluster config:
	{Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:34:54.426954   54744 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.428703   54744 out.go:177] * Starting "no-preload-311070" primary control-plane node in "no-preload-311070" cluster
	I0816 13:34:58.758014   53986 start.go:364] duration metric: took 53.651489078s to acquireMachinesLock for "kubernetes-upgrade-759623"
	I0816 13:34:58.758079   53986 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:34:58.758095   53986 fix.go:54] fixHost starting: 
	I0816 13:34:58.758521   53986 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:34:58.758565   53986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:34:58.775881   53986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38527
	I0816 13:34:58.776223   53986 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:34:58.776668   53986 main.go:141] libmachine: Using API Version  1
	I0816 13:34:58.776693   53986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:34:58.776999   53986 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:34:58.777197   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:34:58.777375   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetState
	I0816 13:34:58.778779   53986 fix.go:112] recreateIfNeeded on kubernetes-upgrade-759623: state=Running err=<nil>
	W0816 13:34:58.778801   53986 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:34:58.781059   53986 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-759623" VM ...
	I0816 13:34:54.429907   54744 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:34:54.430019   54744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:34:54.430047   54744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json: {Name:mkf545ae11f30f3cbfd4a04840f4272f5d268cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:34:54.430111   54744 cache.go:107] acquiring lock: {Name:mkcf36ff956ed1ac5a8a40ebfe67a89bb5cb1135 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430174   54744 cache.go:107] acquiring lock: {Name:mkea4e94ab84306ba648a9b45eb9cb682b0abf4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430169   54744 cache.go:107] acquiring lock: {Name:mkc4278dc56d270025867dbcb101eb8b74f3c2fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430203   54744 cache.go:107] acquiring lock: {Name:mk830f8ad1b2e654296124ab22cee2eb5247567f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430207   54744 cache.go:107] acquiring lock: {Name:mkff6da27ba519239aed781d0a122e53d11ae525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430249   54744 cache.go:107] acquiring lock: {Name:mkf72bd417b7d27cda17ed7726d0df17d5b3f8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430298   54744 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:34:54.430319   54744 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:34:54.430350   54744 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:34:54.430364   54744 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:34:54.430347   54744 cache.go:107] acquiring lock: {Name:mkbfea3dc60ee40dc15d67efb14017850feec05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430201   54744 cache.go:115] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 13:34:54.430390   54744 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 283.003µs
	I0816 13:34:54.430408   54744 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 13:34:54.430433   54744 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:34:54.430418   54744 start.go:360] acquireMachinesLock for no-preload-311070: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:34:54.430472   54744 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:34:54.430177   54744 cache.go:107] acquiring lock: {Name:mka00f8ab77c26482f404dbc01e7b20dbac9eca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:34:54.430713   54744 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:34:54.431730   54744 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:34:54.431743   54744 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:34:54.431765   54744 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:34:54.431738   54744 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:34:54.431786   54744 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:34:54.431983   54744 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:34:54.432253   54744 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:34:54.600407   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:34:54.600447   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:34:54.607690   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:34:54.608225   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:34:54.619869   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:34:54.627861   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:34:54.642166   54744 cache.go:162] opening:  /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0816 13:34:54.762948   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0816 13:34:54.762973   54744 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 332.804415ms
	I0816 13:34:54.762984   54744 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0816 13:34:54.995803   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0816 13:34:54.995830   54744 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0" took 565.692222ms
	I0816 13:34:54.995842   54744 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0816 13:34:56.154676   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0816 13:34:56.154956   54744 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0" took 1.724699468s
	I0816 13:34:56.154992   54744 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0816 13:34:56.167461   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0816 13:34:56.167490   54744 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.737280483s
	I0816 13:34:56.167502   54744 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0816 13:34:56.223350   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0816 13:34:56.223383   54744 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0" took 1.793218892s
	I0816 13:34:56.223412   54744 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0816 13:34:56.239690   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0816 13:34:56.239751   54744 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0" took 1.809403516s
	I0816 13:34:56.239826   54744 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0816 13:34:56.473316   54744 cache.go:157] /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0816 13:34:56.473342   54744 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.043149443s
	I0816 13:34:56.473353   54744 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0816 13:34:56.473367   54744 cache.go:87] Successfully saved all images to host disk.
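	With --preload=false, minikube skips the preloaded bundle and instead caches each control-plane image as an individual tarball under the cache directory shown above. A sketch for inspecting the result on the host (paths copied from the cache lines; the tar listing is only illustrative):

	    ls -lh /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/
	    tar -tf /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 | head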
	I0816 13:34:57.121176   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.121748   53711 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:34:57.121779   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.121789   53711 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:34:57.122155   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237
	I0816 13:34:57.195448   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:34:57.195476   53711 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:34:57.195488   53711 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:34:57.198202   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.198734   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.198764   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.198892   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:34:57.198921   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:34:57.198962   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:34:57.198976   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:34:57.198991   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:34:57.329005   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:34:57.329301   53711 main.go:141] libmachine: (old-k8s-version-882237) KVM machine creation complete!
	I0816 13:34:57.329565   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:34:57.330207   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:57.330419   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:57.330592   53711 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 13:34:57.330604   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:34:57.331893   53711 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 13:34:57.331906   53711 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 13:34:57.331913   53711 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 13:34:57.331922   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.334652   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.335087   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.335125   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.335299   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.335457   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.335598   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.335723   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.335909   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.336157   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.336173   53711 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 13:34:57.448252   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:34:57.448272   53711 main.go:141] libmachine: Detecting the provisioner...
	I0816 13:34:57.448280   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.451473   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.451908   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.451935   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.452107   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.452439   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.452591   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.452775   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.452960   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.453153   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.453172   53711 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 13:34:57.569932   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 13:34:57.570001   53711 main.go:141] libmachine: found compatible host: buildroot
	I0816 13:34:57.570014   53711 main.go:141] libmachine: Provisioning with buildroot...
	I0816 13:34:57.570025   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:57.570297   53711 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:34:57.570326   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:57.570564   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.573141   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.573547   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.573576   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.573743   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.573917   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.574087   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.574246   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.574406   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.574561   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.574573   53711 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:34:57.705485   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:34:57.705532   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.708686   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.709090   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.709150   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.709329   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.709536   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.709699   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.709857   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.710038   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.710273   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.710299   53711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:34:57.838160   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:34:57.838185   53711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:34:57.838229   53711 buildroot.go:174] setting up certificates
	I0816 13:34:57.838241   53711 provision.go:84] configureAuth start
	I0816 13:34:57.838254   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:57.838563   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:34:57.841000   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.841392   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.841421   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.841548   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.843913   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.844296   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.844331   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.844433   53711 provision.go:143] copyHostCerts
	I0816 13:34:57.844493   53711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:34:57.844514   53711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:34:57.844585   53711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:34:57.844693   53711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:34:57.844703   53711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:34:57.844734   53711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:34:57.844811   53711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:34:57.844822   53711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:34:57.844850   53711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:34:57.844937   53711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:34:58.064405   53711 provision.go:177] copyRemoteCerts
	I0816 13:34:58.064456   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:34:58.064485   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.067554   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.067899   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.067927   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.068052   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.068270   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.068399   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.068550   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.155575   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:34:58.179075   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:34:58.201147   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:34:58.224748   53711 provision.go:87] duration metric: took 386.493505ms to configureAuth
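	configureAuth above generated a server certificate for the node (SANs include 127.0.0.1, 192.168.72.105, localhost, minikube and the hostname) and copied it to /etc/docker/server.pem. It can be inspected the same way the cert-options test does in the audit table, e.g. (a sketch; the grep only narrows the output):

	    minikube ssh -p old-k8s-version-882237 -- sudo openssl x509 -text -noout -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'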
	I0816 13:34:58.224776   53711 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:34:58.224959   53711 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:34:58.225028   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.227780   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.228089   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.228114   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.228260   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.228477   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.228659   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.228815   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.229002   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:58.229166   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:58.229186   53711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:34:58.506517   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:34:58.506541   53711 main.go:141] libmachine: Checking connection to Docker...
	I0816 13:34:58.506561   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetURL
	I0816 13:34:58.507727   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using libvirt version 6000000
	I0816 13:34:58.510310   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.510682   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.510713   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.510840   53711 main.go:141] libmachine: Docker is up and running!
	I0816 13:34:58.510853   53711 main.go:141] libmachine: Reticulating splines...
	I0816 13:34:58.510861   53711 client.go:171] duration metric: took 24.271575481s to LocalClient.Create
	I0816 13:34:58.510889   53711 start.go:167] duration metric: took 24.271653175s to libmachine.API.Create "old-k8s-version-882237"
	I0816 13:34:58.510918   53711 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:34:58.510935   53711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:34:58.510958   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.511199   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:34:58.511225   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.513287   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.513545   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.513563   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.513660   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.513828   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.513982   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.514110   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.598806   53711 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:34:58.603081   53711 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:34:58.603107   53711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:34:58.603179   53711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:34:58.603247   53711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:34:58.603332   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:34:58.612634   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:34:58.637919   53711 start.go:296] duration metric: took 126.985371ms for postStartSetup
	I0816 13:34:58.637970   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:34:58.638518   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:34:58.641270   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.641589   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.641624   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.641843   53711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:34:58.642092   53711 start.go:128] duration metric: took 24.42354905s to createHost
	I0816 13:34:58.642170   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.644268   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.644603   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.644630   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.644750   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.644936   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.645078   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.645251   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.645425   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:58.645571   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:58.645587   53711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:34:58.757829   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815298.729789702
	
	I0816 13:34:58.757851   53711 fix.go:216] guest clock: 1723815298.729789702
	I0816 13:34:58.757861   53711 fix.go:229] Guest: 2024-08-16 13:34:58.729789702 +0000 UTC Remote: 2024-08-16 13:34:58.642108832 +0000 UTC m=+74.269001423 (delta=87.68087ms)
	I0816 13:34:58.757907   53711 fix.go:200] guest clock delta is within tolerance: 87.68087ms
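
The clock check above runs "date +%s.%N" on the guest and compares the parsed timestamp with the host-side time recorded around the SSH call; only a delta beyond a tolerance would trigger a guest clock resync. A minimal stand-alone sketch of that comparison in Go (the 2-second threshold is an assumed example value; the log does not state the actual tolerance):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output such as
    // "1723815298.729789702" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1723815298.729789702") // value taken from the log above
        if err != nil {
            panic(err)
        }
        remote := time.Now() // stands in for the host-side timestamp recorded around the SSH call
        delta := guest.Sub(remote)
        tolerance := 2 * time.Second // assumed example threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
        }
    }
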
	I0816 13:34:58.757915   53711 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 24.53957757s
	I0816 13:34:58.757946   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.758281   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:34:58.762379   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.762858   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.762884   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.763027   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.763540   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.763710   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.763782   53711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:34:58.763834   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.763918   53711 ssh_runner.go:195] Run: cat /version.json
	I0816 13:34:58.763932   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.766698   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.766833   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.767054   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.767114   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.767159   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.767189   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.767324   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.767417   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.767525   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.767633   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.767723   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.767755   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.767904   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.767918   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.850216   53711 ssh_runner.go:195] Run: systemctl --version
	I0816 13:34:58.872443   53711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:34:59.035631   53711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:34:59.042839   53711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:34:59.042892   53711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:34:59.060582   53711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:34:59.060606   53711 start.go:495] detecting cgroup driver to use...
	I0816 13:34:59.060663   53711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:34:59.078211   53711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:34:59.092212   53711 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:34:59.092267   53711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:34:59.106310   53711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:34:59.120574   53711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:34:59.250306   53711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:34:58.782766   53986 machine.go:93] provisionDockerMachine start ...
	I0816 13:34:58.782789   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:34:58.782982   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:34:58.785642   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:58.786083   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:58.786111   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:58.786249   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:34:58.786419   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:58.786604   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:58.786761   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:34:58.786931   53986 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:58.787208   53986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:34:58.787225   53986 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:34:58.903485   53986 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-759623
	
	I0816 13:34:58.903518   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:34:58.903794   53986 buildroot.go:166] provisioning hostname "kubernetes-upgrade-759623"
	I0816 13:34:58.903819   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:34:58.904013   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:34:58.906945   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:58.907401   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:58.907429   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:58.907611   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:34:58.907817   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:58.907987   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:58.908137   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:34:58.908338   53986 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:58.908563   53986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:34:58.908578   53986 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-759623 && echo "kubernetes-upgrade-759623" | sudo tee /etc/hostname
	I0816 13:34:59.036961   53986 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-759623
	
	I0816 13:34:59.036990   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:34:59.039575   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.039935   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:59.039966   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.040158   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:34:59.040364   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:59.040588   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:59.040750   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:34:59.040978   53986 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:59.041192   53986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:34:59.041216   53986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-759623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-759623/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-759623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:34:59.150271   53986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
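
The /etc/hosts fix-up above is idempotent: it edits the file only when no existing line already maps to the new hostname, rewriting the 127.0.1.1 entry if one is present and appending it otherwise. A rough stand-alone equivalent of that logic in Go (the path and hostname here are illustrative parameters; this is a sketch of the same behaviour, not the code minikube runs):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet from the log: if no line in
    // the hosts file already ends with the hostname, either rewrite the
    // 127.0.1.1 line or append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
        loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
        for _, l := range lines {
            if hasName.MatchString(l) {
                return nil // already present, nothing to do
            }
        }
        replaced := false
        for i, l := range lines {
            if loopback.MatchString(l) {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/tmp/hosts-example", "kubernetes-upgrade-759623"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
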
	I0816 13:34:59.150304   53986 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:34:59.150343   53986 buildroot.go:174] setting up certificates
	I0816 13:34:59.150354   53986 provision.go:84] configureAuth start
	I0816 13:34:59.150372   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetMachineName
	I0816 13:34:59.150671   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:34:59.153450   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.153805   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:59.153837   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.153997   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:34:59.156488   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.156870   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:59.156899   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.157067   53986 provision.go:143] copyHostCerts
	I0816 13:34:59.157121   53986 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:34:59.157142   53986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:34:59.157212   53986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:34:59.157341   53986 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:34:59.157350   53986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:34:59.157372   53986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:34:59.157445   53986 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:34:59.157453   53986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:34:59.157472   53986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:34:59.157530   53986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-759623 san=[127.0.0.1 192.168.39.57 kubernetes-upgrade-759623 localhost minikube]
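
The "generating server cert" step above signs a machine-specific server certificate with the shared minikube CA, embedding the SANs listed in the log (127.0.0.1, the VM IP 192.168.39.57, the machine name, localhost, minikube). A compressed sketch of that kind of CA-signed certificate generation using only Go's standard library; the key size, validity window and the freshly generated CA are illustrative assumptions, since the real flow reuses the existing ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // CA key pair; in the real flow these already exist under .minikube/certs.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour), // illustrative validity
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SANs listed in the log above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-759623"}},
            DNSNames:     []string{"kubernetes-upgrade-759623", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.57")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }
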
	I0816 13:34:59.229737   53986 provision.go:177] copyRemoteCerts
	I0816 13:34:59.229791   53986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:34:59.229813   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:34:59.232687   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.233061   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:59.233085   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.233267   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:34:59.233485   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:59.233644   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:34:59.233781   53986 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:34:59.327796   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:34:59.355900   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0816 13:34:59.390703   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:34:59.418113   53986 provision.go:87] duration metric: took 267.741496ms to configureAuth
	I0816 13:34:59.418154   53986 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:34:59.418348   53986 config.go:182] Loaded profile config "kubernetes-upgrade-759623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:34:59.418424   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:34:59.421430   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.421795   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:34:59.421829   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:34:59.422070   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:34:59.422293   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:59.422469   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:34:59.422642   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:34:59.422797   53986 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:59.422957   53986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:34:59.422970   53986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:34:59.442694   53711 docker.go:233] disabling docker service ...
	I0816 13:34:59.442758   53711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:34:59.458241   53711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:34:59.471761   53711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:34:59.626497   53711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:34:59.750518   53711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:34:59.764995   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:34:59.783327   53711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:34:59.783417   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.793730   53711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:34:59.793793   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.804302   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.814819   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.825277   53711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:34:59.836486   53711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:34:59.846152   53711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:34:59.846210   53711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:34:59.859922   53711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:34:59.869090   53711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:34:59.981748   53711 ssh_runner.go:195] Run: sudo systemctl restart crio
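
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, drop any existing conmon_cgroup line and re-add conmon_cgroup = "pod" after cgroup_manager, before systemd is reloaded and crio restarted. The same three edits expressed as a small Go helper over a local copy of the file (a sketch of the logic only; the real flow runs sed over SSH exactly as shown in the log):

    package main

    import (
        "os"
        "regexp"
    )

    // patchCrioConf applies the edits from the log to a crio drop-in file.
    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        s := string(data)
        // pause_image -> registry.k8s.io/pause:3.2
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
        // cgroup_manager -> cgroupfs
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
        // remove any existing conmon_cgroup line, then add it back after cgroup_manager
        s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
        s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
        return os.WriteFile(path, []byte(s), 0644)
    }

    func main() {
        if err := patchCrioConf("/tmp/02-crio.conf"); err != nil {
            os.Exit(1)
        }
    }
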
	I0816 13:35:00.122606   53711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:35:00.122689   53711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:35:00.128485   53711 start.go:563] Will wait 60s for crictl version
	I0816 13:35:00.128555   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:00.132597   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:35:00.175998   53711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:35:00.176074   53711 ssh_runner.go:195] Run: crio --version
	I0816 13:35:00.205444   53711 ssh_runner.go:195] Run: crio --version
	I0816 13:35:00.234496   53711 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:35:00.235753   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:35:00.239045   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:35:00.239400   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:35:00.239422   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:35:00.239612   53711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:35:00.243917   53711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:35:00.257355   53711 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:35:00.257470   53711 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:35:00.257530   53711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:35:00.290346   53711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:35:00.290419   53711 ssh_runner.go:195] Run: which lz4
	I0816 13:35:00.294620   53711 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:35:00.298953   53711 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:35:00.298991   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:35:01.935214   53711 crio.go:462] duration metric: took 1.640622471s to copy over tarball
	I0816 13:35:01.935291   53711 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:35:04.384829   53711 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.449514781s)
	I0816 13:35:04.384857   53711 crio.go:469] duration metric: took 2.449613683s to extract the tarball
	I0816 13:35:04.384864   53711 ssh_runner.go:146] rm: /preloaded.tar.lz4
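
The preload step above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when the stat fails, and unpacks it into /var with lz4 while preserving security xattrs, then removes the tarball. A minimal Go wrapper around the exact extraction command from the log (run locally; it assumes the tarball is already in place and that sudo is available on the target):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        tarball := "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            log.Fatalf("tarball missing, would scp it from the local image cache first: %v", err)
        }
        // Same flags as in the log: preserve security xattrs, decompress with lz4,
        // extract under /var so the preloaded images land in container storage.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
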
	I0816 13:35:05.843075   54744 start.go:364] duration metric: took 11.41262278s to acquireMachinesLock for "no-preload-311070"
	I0816 13:35:05.843124   54744 start.go:93] Provisioning new machine with config: &{Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:35:05.843261   54744 start.go:125] createHost starting for "" (driver="kvm2")
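
The profiles being created in parallel here serialize on a shared machines lock: each start reports how long it waited to acquire the lock (11.41s for "no-preload-311070" above) and, on release, how long it held it (24.5s for "old-k8s-version-882237" earlier). A small stand-alone sketch of that acquire/hold/release timing pattern (profile names reused from the log, durations invented purely for illustration):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var machinesLock sync.Mutex

    // createMachine mimics the pattern from the log: measure how long we
    // waited for the lock and, separately, how long we held it.
    func createMachine(name string, work time.Duration) {
        waitStart := time.Now()
        machinesLock.Lock()
        fmt.Printf("took %v to acquireMachinesLock for %q\n", time.Since(waitStart), name)
        holdStart := time.Now()
        time.Sleep(work) // stands in for the actual machine creation work
        machinesLock.Unlock()
        fmt.Printf("releasing machines lock for %q, held for %v\n", name, time.Since(holdStart))
    }

    func main() {
        var wg sync.WaitGroup
        for _, name := range []string{"old-k8s-version-882237", "no-preload-311070"} {
            wg.Add(1)
            go func(n string) {
                defer wg.Done()
                createMachine(n, 50*time.Millisecond)
            }(name)
        }
        wg.Wait()
    }
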
	I0816 13:35:04.427565   53711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:35:04.473700   53711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:35:04.473722   53711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:35:04.473791   53711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.473837   53711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.473855   53711 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.473861   53711 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.473819   53711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.473926   53711 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:35:04.473923   53711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.473799   53711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:35:04.475025   53711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.475178   53711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.475206   53711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.475234   53711 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:35:04.475268   53711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:35:04.475271   53711 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.475280   53711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.475268   53711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.634782   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.658625   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:35:04.661891   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.668677   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.672009   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.682629   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.692347   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.710938   53711 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:35:04.710988   53711 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.711037   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.799542   53711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:35:04.799585   53711 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:35:04.799593   53711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.799616   53711 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:35:04.799644   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.799657   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.799724   53711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:35:04.799765   53711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.799798   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.833801   53711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:35:04.833838   53711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:35:04.833845   53711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.833855   53711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.833895   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.833895   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.846807   53711 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:35:04.846856   53711 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.846895   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.846897   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.846917   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.846951   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:35:04.846973   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.847041   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.847041   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.997860   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.997860   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.997937   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.997973   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.998019   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.998047   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:35:04.998093   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:05.152528   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:05.152609   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:05.166733   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:35:05.166803   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:35:05.166852   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:05.166954   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:05.166968   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:05.305180   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:35:05.305407   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:35:05.321669   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:35:05.332541   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:35:05.333945   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:35:05.333978   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:35:05.334083   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:35:05.341815   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:35:05.380730   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:35:05.498648   53711 cache_images.go:92] duration metric: took 1.024910728s to LoadCachedImages
	W0816 13:35:05.498780   53711 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0816 13:35:05.498810   53711 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:35:05.498935   53711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:35:05.499016   53711 ssh_runner.go:195] Run: crio config
	I0816 13:35:05.570096   53711 cni.go:84] Creating CNI manager for ""
	I0816 13:35:05.570118   53711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:35:05.570130   53711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:35:05.570152   53711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:35:05.570311   53711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:35:05.570374   53711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:35:05.582463   53711 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:35:05.582536   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:35:05.594526   53711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:35:05.614392   53711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:35:05.632339   53711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:35:05.650063   53711 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:35:05.654505   53711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:35:05.667261   53711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:35:05.794746   53711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:35:05.812919   53711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:35:05.812938   53711 certs.go:194] generating shared ca certs ...
	I0816 13:35:05.812951   53711 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:05.813111   53711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:35:05.813192   53711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:35:05.813204   53711 certs.go:256] generating profile certs ...
	I0816 13:35:05.813266   53711 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:35:05.813283   53711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt with IP's: []
	I0816 13:35:05.899586   53711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt ...
	I0816 13:35:05.899616   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: {Name:mkb8ad7deb29a0014c885f5dd3b2339661a5f1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:05.899770   53711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key ...
	I0816 13:35:05.899783   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key: {Name:mk770a549b659846f110d19a24ba4442cf7bc258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:05.899855   53711 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:35:05.899878   53711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.105]
	I0816 13:35:06.086072   53711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8 ...
	I0816 13:35:06.086098   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8: {Name:mkacec3a3ad6fe417dd5c97ef6e2a1bdb6b021bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.086231   53711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8 ...
	I0816 13:35:06.086250   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8: {Name:mkb38dabae87f5f624dad03f3ba3ce14d833fa38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.086314   53711 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt
	I0816 13:35:06.086384   53711 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key
	I0816 13:35:06.086440   53711 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:35:06.086455   53711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt with IP's: []
	I0816 13:35:06.145219   53711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt ...
	I0816 13:35:06.145241   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt: {Name:mk9bf8840b3de3673b5ab193a6173b7c35470d3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.145387   53711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key ...
	I0816 13:35:06.145403   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key: {Name:mk7153ded6f2def9061b6e4db01262050549c214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
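	Each profile certificate above is generated in-process and signed by the shared minikube CA; a rough openssl sketch of the same idea for the client cert (the subject is an illustrative assumption, not necessarily the exact value minikube uses):
	  # Hypothetical openssl equivalent of a CA-signed client certificate (assumed subject).
	  openssl genrsa -out client.key 2048
	  openssl req -new -key client.key -subj "/CN=minikube-user" -out client.csr
	  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt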
	I0816 13:35:06.145591   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:35:06.145625   53711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:35:06.145639   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:35:06.145662   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:35:06.145687   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:35:06.145711   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:35:06.145756   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:35:06.146398   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:35:06.178670   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:35:06.208175   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:35:06.238389   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:35:06.281924   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:35:06.309605   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:35:06.336991   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:35:06.364518   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:35:06.391596   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:35:06.427798   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:35:06.457117   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:35:06.492950   53711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:35:06.512623   53711 ssh_runner.go:195] Run: openssl version
	I0816 13:35:06.519471   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:35:06.536102   53711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:35:06.541621   53711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:35:06.541685   53711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:35:06.548197   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:35:06.561299   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:35:06.573610   53711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:35:06.578287   53711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:35:06.578343   53711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:35:06.586521   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:35:06.599697   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:35:06.612616   53711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:06.620194   53711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:06.620266   53711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:06.627787   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
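	The hash-named links created above follow the standard OpenSSL CA directory layout; the minikubeCA step can be reproduced by hand with the same two commands the log shows:
	  # Recreate the hashed symlink for minikubeCA.pem manually.
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0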
	I0816 13:35:06.642597   53711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:35:06.647944   53711 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 13:35:06.648000   53711 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:35:06.648067   53711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:35:06.648118   53711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:35:06.715446   53711 cri.go:89] found id: ""
	I0816 13:35:06.715523   53711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:35:06.731071   53711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:35:06.750235   53711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:35:06.768053   53711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:35:06.768068   53711 kubeadm.go:157] found existing configuration files:
	
	I0816 13:35:06.768113   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:35:06.778751   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:35:06.778815   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:35:06.791285   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:35:06.801308   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:35:06.801378   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:35:06.811475   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:35:06.820924   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:35:06.820981   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:35:06.831409   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:35:06.841061   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:35:06.841132   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:35:06.851070   53711 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:35:06.971768   53711 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:35:06.971889   53711 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:35:07.122602   53711 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:35:07.122793   53711 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:35:07.122959   53711 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:35:07.321792   53711 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:35:06.049445   54744 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 13:35:06.049676   54744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:35:06.049716   54744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:35:06.064814   54744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0816 13:35:06.065319   54744 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:35:06.065852   54744 main.go:141] libmachine: Using API Version  1
	I0816 13:35:06.065876   54744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:35:06.066261   54744 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:35:06.066423   54744 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:35:06.066576   54744 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:35:06.066727   54744 start.go:159] libmachine.API.Create for "no-preload-311070" (driver="kvm2")
	I0816 13:35:06.066746   54744 client.go:168] LocalClient.Create starting
	I0816 13:35:06.066772   54744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 13:35:06.066810   54744 main.go:141] libmachine: Decoding PEM data...
	I0816 13:35:06.066830   54744 main.go:141] libmachine: Parsing certificate...
	I0816 13:35:06.066904   54744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 13:35:06.066933   54744 main.go:141] libmachine: Decoding PEM data...
	I0816 13:35:06.066945   54744 main.go:141] libmachine: Parsing certificate...
	I0816 13:35:06.066968   54744 main.go:141] libmachine: Running pre-create checks...
	I0816 13:35:06.066981   54744 main.go:141] libmachine: (no-preload-311070) Calling .PreCreateCheck
	I0816 13:35:06.067357   54744 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:35:06.067702   54744 main.go:141] libmachine: Creating machine...
	I0816 13:35:06.067715   54744 main.go:141] libmachine: (no-preload-311070) Calling .Create
	I0816 13:35:06.067846   54744 main.go:141] libmachine: (no-preload-311070) Creating KVM machine...
	I0816 13:35:06.069244   54744 main.go:141] libmachine: (no-preload-311070) DBG | found existing default KVM network
	I0816 13:35:06.070611   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.070469   54892 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:d7:42} reservation:<nil>}
	I0816 13:35:06.071698   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.071610   54892 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:cc:5c:97} reservation:<nil>}
	I0816 13:35:06.072723   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.072655   54892 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2fc0}
	I0816 13:35:06.072743   54744 main.go:141] libmachine: (no-preload-311070) DBG | created network xml: 
	I0816 13:35:06.072756   54744 main.go:141] libmachine: (no-preload-311070) DBG | <network>
	I0816 13:35:06.072765   54744 main.go:141] libmachine: (no-preload-311070) DBG |   <name>mk-no-preload-311070</name>
	I0816 13:35:06.072777   54744 main.go:141] libmachine: (no-preload-311070) DBG |   <dns enable='no'/>
	I0816 13:35:06.072787   54744 main.go:141] libmachine: (no-preload-311070) DBG |   
	I0816 13:35:06.072799   54744 main.go:141] libmachine: (no-preload-311070) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0816 13:35:06.072810   54744 main.go:141] libmachine: (no-preload-311070) DBG |     <dhcp>
	I0816 13:35:06.072835   54744 main.go:141] libmachine: (no-preload-311070) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0816 13:35:06.072848   54744 main.go:141] libmachine: (no-preload-311070) DBG |     </dhcp>
	I0816 13:35:06.072853   54744 main.go:141] libmachine: (no-preload-311070) DBG |   </ip>
	I0816 13:35:06.072859   54744 main.go:141] libmachine: (no-preload-311070) DBG |   
	I0816 13:35:06.072866   54744 main.go:141] libmachine: (no-preload-311070) DBG | </network>
	I0816 13:35:06.072873   54744 main.go:141] libmachine: (no-preload-311070) DBG | 
	I0816 13:35:06.230961   54744 main.go:141] libmachine: (no-preload-311070) DBG | trying to create private KVM network mk-no-preload-311070 192.168.61.0/24...
	I0816 13:35:06.308685   54744 main.go:141] libmachine: (no-preload-311070) DBG | private KVM network mk-no-preload-311070 192.168.61.0/24 created
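	Outside libmachine, the same private network could be created from the XML printed above with virsh; a sketch assuming that XML is saved as net.xml:
	  # Define and start the private libvirt network by hand (illustrative only).
	  virsh --connect qemu:///system net-define net.xml
	  virsh --connect qemu:///system net-start mk-no-preload-311070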
	I0816 13:35:06.308843   54744 main.go:141] libmachine: (no-preload-311070) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070 ...
	I0816 13:35:06.308873   54744 main.go:141] libmachine: (no-preload-311070) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 13:35:06.308983   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.308791   54892 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:35:06.309049   54744 main.go:141] libmachine: (no-preload-311070) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 13:35:06.561557   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.561461   54892 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa...
	I0816 13:35:06.664498   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.664353   54892 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/no-preload-311070.rawdisk...
	I0816 13:35:06.664537   54744 main.go:141] libmachine: (no-preload-311070) DBG | Writing magic tar header
	I0816 13:35:06.664557   54744 main.go:141] libmachine: (no-preload-311070) DBG | Writing SSH key tar header
	I0816 13:35:06.664572   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:06.664464   54892 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070 ...
	I0816 13:35:06.664590   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070
	I0816 13:35:06.664673   54744 main.go:141] libmachine: (no-preload-311070) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070 (perms=drwx------)
	I0816 13:35:06.664703   54744 main.go:141] libmachine: (no-preload-311070) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 13:35:06.664719   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 13:35:06.664734   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:35:06.664744   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 13:35:06.664761   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 13:35:06.664772   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home/jenkins
	I0816 13:35:06.664826   54744 main.go:141] libmachine: (no-preload-311070) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 13:35:06.664863   54744 main.go:141] libmachine: (no-preload-311070) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 13:35:06.664873   54744 main.go:141] libmachine: (no-preload-311070) DBG | Checking permissions on dir: /home
	I0816 13:35:06.664886   54744 main.go:141] libmachine: (no-preload-311070) DBG | Skipping /home - not owner
	I0816 13:35:06.664901   54744 main.go:141] libmachine: (no-preload-311070) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 13:35:06.664926   54744 main.go:141] libmachine: (no-preload-311070) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 13:35:06.664937   54744 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:35:06.666031   54744 main.go:141] libmachine: (no-preload-311070) define libvirt domain using xml: 
	I0816 13:35:06.666052   54744 main.go:141] libmachine: (no-preload-311070) <domain type='kvm'>
	I0816 13:35:06.666063   54744 main.go:141] libmachine: (no-preload-311070)   <name>no-preload-311070</name>
	I0816 13:35:06.666071   54744 main.go:141] libmachine: (no-preload-311070)   <memory unit='MiB'>2200</memory>
	I0816 13:35:06.666108   54744 main.go:141] libmachine: (no-preload-311070)   <vcpu>2</vcpu>
	I0816 13:35:06.666120   54744 main.go:141] libmachine: (no-preload-311070)   <features>
	I0816 13:35:06.666134   54744 main.go:141] libmachine: (no-preload-311070)     <acpi/>
	I0816 13:35:06.666177   54744 main.go:141] libmachine: (no-preload-311070)     <apic/>
	I0816 13:35:06.666192   54744 main.go:141] libmachine: (no-preload-311070)     <pae/>
	I0816 13:35:06.666202   54744 main.go:141] libmachine: (no-preload-311070)     
	I0816 13:35:06.666214   54744 main.go:141] libmachine: (no-preload-311070)   </features>
	I0816 13:35:06.666238   54744 main.go:141] libmachine: (no-preload-311070)   <cpu mode='host-passthrough'>
	I0816 13:35:06.666244   54744 main.go:141] libmachine: (no-preload-311070)   
	I0816 13:35:06.666252   54744 main.go:141] libmachine: (no-preload-311070)   </cpu>
	I0816 13:35:06.666258   54744 main.go:141] libmachine: (no-preload-311070)   <os>
	I0816 13:35:06.666267   54744 main.go:141] libmachine: (no-preload-311070)     <type>hvm</type>
	I0816 13:35:06.666274   54744 main.go:141] libmachine: (no-preload-311070)     <boot dev='cdrom'/>
	I0816 13:35:06.666281   54744 main.go:141] libmachine: (no-preload-311070)     <boot dev='hd'/>
	I0816 13:35:06.666288   54744 main.go:141] libmachine: (no-preload-311070)     <bootmenu enable='no'/>
	I0816 13:35:06.666304   54744 main.go:141] libmachine: (no-preload-311070)   </os>
	I0816 13:35:06.666319   54744 main.go:141] libmachine: (no-preload-311070)   <devices>
	I0816 13:35:06.666331   54744 main.go:141] libmachine: (no-preload-311070)     <disk type='file' device='cdrom'>
	I0816 13:35:06.666345   54744 main.go:141] libmachine: (no-preload-311070)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/boot2docker.iso'/>
	I0816 13:35:06.666363   54744 main.go:141] libmachine: (no-preload-311070)       <target dev='hdc' bus='scsi'/>
	I0816 13:35:06.666372   54744 main.go:141] libmachine: (no-preload-311070)       <readonly/>
	I0816 13:35:06.666382   54744 main.go:141] libmachine: (no-preload-311070)     </disk>
	I0816 13:35:06.666391   54744 main.go:141] libmachine: (no-preload-311070)     <disk type='file' device='disk'>
	I0816 13:35:06.666401   54744 main.go:141] libmachine: (no-preload-311070)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 13:35:06.666421   54744 main.go:141] libmachine: (no-preload-311070)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/no-preload-311070.rawdisk'/>
	I0816 13:35:06.666433   54744 main.go:141] libmachine: (no-preload-311070)       <target dev='hda' bus='virtio'/>
	I0816 13:35:06.666441   54744 main.go:141] libmachine: (no-preload-311070)     </disk>
	I0816 13:35:06.666451   54744 main.go:141] libmachine: (no-preload-311070)     <interface type='network'>
	I0816 13:35:06.666476   54744 main.go:141] libmachine: (no-preload-311070)       <source network='mk-no-preload-311070'/>
	I0816 13:35:06.666498   54744 main.go:141] libmachine: (no-preload-311070)       <model type='virtio'/>
	I0816 13:35:06.666515   54744 main.go:141] libmachine: (no-preload-311070)     </interface>
	I0816 13:35:06.666528   54744 main.go:141] libmachine: (no-preload-311070)     <interface type='network'>
	I0816 13:35:06.666538   54744 main.go:141] libmachine: (no-preload-311070)       <source network='default'/>
	I0816 13:35:06.666563   54744 main.go:141] libmachine: (no-preload-311070)       <model type='virtio'/>
	I0816 13:35:06.666574   54744 main.go:141] libmachine: (no-preload-311070)     </interface>
	I0816 13:35:06.666581   54744 main.go:141] libmachine: (no-preload-311070)     <serial type='pty'>
	I0816 13:35:06.666591   54744 main.go:141] libmachine: (no-preload-311070)       <target port='0'/>
	I0816 13:35:06.666599   54744 main.go:141] libmachine: (no-preload-311070)     </serial>
	I0816 13:35:06.666607   54744 main.go:141] libmachine: (no-preload-311070)     <console type='pty'>
	I0816 13:35:06.666616   54744 main.go:141] libmachine: (no-preload-311070)       <target type='serial' port='0'/>
	I0816 13:35:06.666628   54744 main.go:141] libmachine: (no-preload-311070)     </console>
	I0816 13:35:06.666642   54744 main.go:141] libmachine: (no-preload-311070)     <rng model='virtio'>
	I0816 13:35:06.666657   54744 main.go:141] libmachine: (no-preload-311070)       <backend model='random'>/dev/random</backend>
	I0816 13:35:06.666667   54744 main.go:141] libmachine: (no-preload-311070)     </rng>
	I0816 13:35:06.666678   54744 main.go:141] libmachine: (no-preload-311070)     
	I0816 13:35:06.666687   54744 main.go:141] libmachine: (no-preload-311070)     
	I0816 13:35:06.666696   54744 main.go:141] libmachine: (no-preload-311070)   </devices>
	I0816 13:35:06.666704   54744 main.go:141] libmachine: (no-preload-311070) </domain>
	I0816 13:35:06.666717   54744 main.go:141] libmachine: (no-preload-311070) 
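	Likewise, the domain XML above maps onto plain virsh commands; a sketch assuming it is saved as domain.xml:
	  # Define and boot the KVM guest from the XML printed above (illustrative only).
	  virsh --connect qemu:///system define domain.xml
	  virsh --connect qemu:///system start no-preload-311070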
	I0816 13:35:06.761414   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:ec:cf:e6 in network default
	I0816 13:35:06.763334   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:06.763369   54744 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:35:06.764991   54744 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:35:06.765015   54744 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:35:06.765581   54744 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:35:06.766698   54744 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:35:08.390695   54744 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:35:08.391782   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:08.392178   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:08.392241   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:08.392182   54892 retry.go:31] will retry after 203.102659ms: waiting for machine to come up
	I0816 13:35:08.596640   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:08.597140   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:08.597184   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:08.597118   54892 retry.go:31] will retry after 235.792446ms: waiting for machine to come up
	I0816 13:35:08.834514   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:08.835052   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:08.835083   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:08.835010   54892 retry.go:31] will retry after 346.213966ms: waiting for machine to come up
	I0816 13:35:09.182468   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:09.183063   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:09.183093   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:09.183009   54892 retry.go:31] will retry after 401.638751ms: waiting for machine to come up
	I0816 13:35:07.408811   53711 out.go:235]   - Generating certificates and keys ...
	I0816 13:35:07.408963   53711 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:35:07.409065   53711 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:35:07.583136   53711 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 13:35:07.659833   53711 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 13:35:07.873225   53711 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 13:35:08.185445   53711 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 13:35:08.304854   53711 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 13:35:08.305142   53711 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-882237] and IPs [192.168.72.105 127.0.0.1 ::1]
	I0816 13:35:08.455565   53711 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 13:35:08.455867   53711 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-882237] and IPs [192.168.72.105 127.0.0.1 ::1]
	I0816 13:35:08.830735   53711 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 13:35:05.588639   53986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:35:05.588668   53986 machine.go:96] duration metric: took 6.805886726s to provisionDockerMachine
	I0816 13:35:05.588683   53986 start.go:293] postStartSetup for "kubernetes-upgrade-759623" (driver="kvm2")
	I0816 13:35:05.588697   53986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:35:05.588721   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:35:05.589272   53986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:35:05.589307   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:35:05.592274   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.592776   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:35:05.592805   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.593001   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:35:05.593209   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:35:05.593407   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:35:05.593576   53986 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:35:05.681033   53986 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:35:05.685737   53986 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:35:05.685770   53986 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:35:05.685852   53986 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:35:05.685970   53986 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:35:05.686093   53986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:35:05.698599   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:35:05.723937   53986 start.go:296] duration metric: took 135.239666ms for postStartSetup
	I0816 13:35:05.723983   53986 fix.go:56] duration metric: took 6.965896631s for fixHost
	I0816 13:35:05.724006   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:35:05.726822   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.727118   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:35:05.727149   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.727350   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:35:05.727545   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:35:05.727723   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:35:05.727881   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:35:05.728064   53986 main.go:141] libmachine: Using SSH client type: native
	I0816 13:35:05.728239   53986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0816 13:35:05.728251   53986 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:35:05.842870   53986 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815305.832118411
	
	I0816 13:35:05.842892   53986 fix.go:216] guest clock: 1723815305.832118411
	I0816 13:35:05.842902   53986 fix.go:229] Guest: 2024-08-16 13:35:05.832118411 +0000 UTC Remote: 2024-08-16 13:35:05.723987231 +0000 UTC m=+60.751805278 (delta=108.13118ms)
	I0816 13:35:05.842926   53986 fix.go:200] guest clock delta is within tolerance: 108.13118ms
	I0816 13:35:05.842933   53986 start.go:83] releasing machines lock for "kubernetes-upgrade-759623", held for 7.084878317s
	I0816 13:35:05.842961   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:35:05.843527   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:35:05.847387   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.847798   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:35:05.847838   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.848022   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:35:05.848688   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:35:05.848903   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .DriverName
	I0816 13:35:05.848999   53986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:35:05.849038   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:35:05.849162   53986 ssh_runner.go:195] Run: cat /version.json
	I0816 13:35:05.849191   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHHostname
	I0816 13:35:05.852181   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.852449   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.852607   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:35:05.852639   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.852867   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:35:05.852875   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:35:05.852888   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:05.853072   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHPort
	I0816 13:35:05.853105   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:35:05.853218   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHKeyPath
	I0816 13:35:05.853259   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:35:05.853413   53986 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:35:05.853421   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetSSHUsername
	I0816 13:35:05.853590   53986 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/kubernetes-upgrade-759623/id_rsa Username:docker}
	I0816 13:35:05.963973   53986 ssh_runner.go:195] Run: systemctl --version
	I0816 13:35:05.973029   53986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:35:06.145081   53986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:35:06.153686   53986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:35:06.153751   53986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:35:06.163802   53986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 13:35:06.163828   53986 start.go:495] detecting cgroup driver to use...
	I0816 13:35:06.163896   53986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:35:06.186855   53986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:35:06.204138   53986 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:35:06.204219   53986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:35:06.221259   53986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:35:06.245877   53986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:35:06.420640   53986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:35:06.593169   53986 docker.go:233] disabling docker service ...
	I0816 13:35:06.593229   53986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:35:06.611669   53986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:35:06.628594   53986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:35:06.795534   53986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:35:06.969933   53986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:35:06.985316   53986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:35:07.006495   53986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:35:07.006571   53986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.018529   53986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:35:07.018629   53986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.032547   53986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.044047   53986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.056223   53986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:35:07.068046   53986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.080554   53986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.093677   53986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:35:07.105347   53986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:35:07.116151   53986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:35:07.127499   53986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:35:07.287162   53986 ssh_runner.go:195] Run: sudo systemctl restart crio
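	The sed edits above boil down to a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; one quick way to confirm them on the node, with the expected values taken from the commands shown:
	  # Verify the CRI-O drop-in after the reconfiguration above.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # Expected (per the sed commands above):
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",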
	I0816 13:35:09.528588   53711 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 13:35:09.636503   53711 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 13:35:09.636821   53711 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:35:09.817796   53711 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:35:10.095902   53711 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:35:10.355911   53711 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:35:10.474564   53711 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:35:10.495265   53711 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:35:10.495443   53711 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:35:10.495521   53711 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:35:10.629692   53711 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:35:09.586294   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:09.586707   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:09.586735   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:09.586666   54892 retry.go:31] will retry after 710.400009ms: waiting for machine to come up
	I0816 13:35:10.299126   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:10.299552   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:10.299576   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:10.299517   54892 retry.go:31] will retry after 885.778788ms: waiting for machine to come up
	I0816 13:35:11.186978   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:11.187405   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:11.187441   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:11.187372   54892 retry.go:31] will retry after 1.006356406s: waiting for machine to come up
	I0816 13:35:12.195478   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:12.195886   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:12.195907   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:12.195845   54892 retry.go:31] will retry after 1.324744933s: waiting for machine to come up
	I0816 13:35:13.522486   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:13.522889   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:13.522918   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:13.522844   54892 retry.go:31] will retry after 1.648017121s: waiting for machine to come up
	I0816 13:35:10.631500   53711 out.go:235]   - Booting up control plane ...
	I0816 13:35:10.631632   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:35:10.635885   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:35:10.637781   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:35:10.638679   53711 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:35:10.655477   53711 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
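The wait-control-plane phase reported above amounts to polling the API server's health endpoint until the static pods answer, or the 4m0s budget runs out. A rough stand-alone sketch of that kind of readiness poll, assuming an API server on 127.0.0.1:8443 and skipping TLS verification for brevity (this is not kubeadm's code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://127.0.0.1:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("control plane is healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for the control plane")
    }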
	I0816 13:35:14.437142   53986 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.14993384s)
	I0816 13:35:14.437189   53986 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:35:14.437246   53986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:35:14.442974   53986 start.go:563] Will wait 60s for crictl version
	I0816 13:35:14.443039   53986 ssh_runner.go:195] Run: which crictl
	I0816 13:35:14.447465   53986 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:35:14.499780   53986 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:35:14.499917   53986 ssh_runner.go:195] Run: crio --version
	I0816 13:35:14.533027   53986 ssh_runner.go:195] Run: crio --version
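The runtime probe above first stats the CRI socket and then shells out to crictl and crio for their versions. A minimal stand-alone version of the same check, assuming crictl is on PATH and the caller can sudo:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // The runtime is only usable once its socket exists.
        if _, err := os.Stat("/var/run/crio/crio.sock"); err != nil {
            fmt.Println("CRI socket not ready:", err)
            return
        }
        out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
        if err != nil {
            fmt.Println("crictl version failed:", err)
            return
        }
        fmt.Print(string(out))
    }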
	I0816 13:35:14.566042   53986 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:35:14.567433   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) Calling .GetIP
	I0816 13:35:14.570473   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:14.570799   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:14:2a", ip: ""} in network mk-kubernetes-upgrade-759623: {Iface:virbr1 ExpiryTime:2024-08-16 14:33:35 +0000 UTC Type:0 Mac:52:54:00:2b:14:2a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:kubernetes-upgrade-759623 Clientid:01:52:54:00:2b:14:2a}
	I0816 13:35:14.570830   53986 main.go:141] libmachine: (kubernetes-upgrade-759623) DBG | domain kubernetes-upgrade-759623 has defined IP address 192.168.39.57 and MAC address 52:54:00:2b:14:2a in network mk-kubernetes-upgrade-759623
	I0816 13:35:14.571027   53986 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:35:14.575754   53986 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-759623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:35:14.575844   53986 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:35:14.575880   53986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:35:14.627139   53986 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:35:14.627162   53986 crio.go:433] Images already preloaded, skipping extraction
	I0816 13:35:14.627209   53986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:35:14.662458   53986 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:35:14.662482   53986 cache_images.go:84] Images are preloaded, skipping loading
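The preload check runs `sudo crictl images --output json` and compares the returned image list against the expected preloaded set. A small sketch of decoding that output (the JSON field names follow crictl's usual output shape and should be treated as an assumption here):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json`
    // closely enough for this sketch.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl images failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, img := range list.Images {
            fmt.Println(img.ID, img.RepoTags)
        }
    }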
	I0816 13:35:14.662489   53986 kubeadm.go:934] updating node { 192.168.39.57 8443 v1.31.0 crio true true} ...
	I0816 13:35:14.662585   53986 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-759623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
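The kubelet unit above is rendered from the node config (binary path, hostname override, node IP) and copied to the kubelet drop-in directory a few lines further down. A hedged sketch of generating such a drop-in with text/template; the struct fields and values here are illustrative, taken from the log, and are not minikube's actual template code:

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Illustrative values matching the log above.
        data := struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.31.0", "kubernetes-upgrade-759623", "192.168.39.57"}
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }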
	I0816 13:35:14.662655   53986 ssh_runner.go:195] Run: crio config
	I0816 13:35:14.715632   53986 cni.go:84] Creating CNI manager for ""
	I0816 13:35:14.715667   53986 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:35:14.715684   53986 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:35:14.715713   53986 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-759623 NodeName:kubernetes-upgrade-759623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:35:14.715904   53986 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-759623"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:35:14.715988   53986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:35:14.728103   53986 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:35:14.728177   53986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:35:14.738441   53986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0816 13:35:14.757316   53986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:35:14.774020   53986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
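The 2166-byte file copied to /var/tmp/minikube/kubeadm.yaml.new above is the multi-document kubeadm config rendered a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick stdlib-only way to sanity-check which kinds such a file contains; this is just a sketch, not how minikube validates the config, and the path is illustrative:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative path
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        doc := 1
        s := bufio.NewScanner(f)
        for s.Scan() {
            line := strings.TrimSpace(s.Text())
            if line == "---" {
                doc++
                continue
            }
            if strings.HasPrefix(line, "kind:") {
                fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
            }
        }
        if err := s.Err(); err != nil {
            fmt.Println(err)
        }
    }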
	I0816 13:35:14.792785   53986 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0816 13:35:14.796956   53986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:35:14.941640   53986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:35:14.957689   53986 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623 for IP: 192.168.39.57
	I0816 13:35:14.957717   53986 certs.go:194] generating shared ca certs ...
	I0816 13:35:14.957742   53986 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:14.957911   53986 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:35:14.957985   53986 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:35:14.957997   53986 certs.go:256] generating profile certs ...
	I0816 13:35:14.958130   53986 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/client.key
	I0816 13:35:14.958193   53986 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key.d43cdebc
	I0816 13:35:14.958249   53986 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.key
	I0816 13:35:14.958406   53986 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:35:14.958448   53986 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:35:14.958461   53986 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:35:14.958491   53986 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:35:14.958529   53986 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:35:14.958560   53986 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:35:14.958624   53986 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:35:14.959452   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:35:14.986014   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:35:15.172294   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:15.172778   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:15.172807   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:15.172726   54892 retry.go:31] will retry after 1.550263758s: waiting for machine to come up
	I0816 13:35:16.724384   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:16.724960   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:16.724990   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:16.724915   54892 retry.go:31] will retry after 2.526291952s: waiting for machine to come up
	I0816 13:35:19.253907   54744 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:35:19.254314   54744 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:35:19.254344   54744 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:35:19.254272   54892 retry.go:31] will retry after 3.480332661s: waiting for machine to come up
	I0816 13:35:15.011573   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:35:15.040503   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:35:15.068444   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 13:35:15.096297   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:35:15.122396   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:35:15.147499   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/kubernetes-upgrade-759623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:35:15.173378   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:35:15.197591   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:35:15.224679   53986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:35:15.250603   53986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:35:15.268966   53986 ssh_runner.go:195] Run: openssl version
	I0816 13:35:15.277478   53986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:35:15.291364   53986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:35:15.297118   53986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:35:15.297179   53986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:35:15.303172   53986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:35:15.312532   53986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:35:15.323254   53986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:15.327658   53986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:15.327714   53986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:15.333417   53986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:35:15.345736   53986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:35:15.360070   53986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:35:15.364803   53986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:35:15.364871   53986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:35:15.370897   53986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
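The `ln -fs` commands above follow OpenSSL's hashed-directory convention: each CA placed under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, where the hash comes from `openssl x509 -hash -noout`. A small sketch of the same two steps from Go, assuming openssl is on PATH and the process can write to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // equivalent of ln -f: ignore error if the link does not exist yet
        if err := os.Symlink(cert, link); err != nil {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Println("linked", link, "->", cert)
    }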
	I0816 13:35:15.380404   53986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:35:15.385024   53986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:35:15.390640   53986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:35:15.396155   53986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:35:15.401637   53986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:35:15.407139   53986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:35:15.412954   53986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
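`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is how the existing control-plane certificates are vetted before being reused above. The same check can be done natively with crypto/x509; a sketch using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        // Mirror `openssl x509 -checkend 86400`.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
        } else {
            fmt.Println("certificate is valid past the next 24h")
        }
    }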
	I0816 13:35:15.418764   53986 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-759623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-759623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:35:15.418840   53986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:35:15.418911   53986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:35:15.458849   53986 cri.go:89] found id: "216477333f98684dbfddd6483c9dd9c856bcfd4f3adf60e398a6dbc6f4660a54"
	I0816 13:35:15.458876   53986 cri.go:89] found id: "a4360310fa01d248ec85833d065ce5576102ada91df04d6be122b7603727715c"
	I0816 13:35:15.458882   53986 cri.go:89] found id: "a0967fd24cbb441eaa2b5c854faee4427d1abd3f7d6a33906c7e0239c9e155a6"
	I0816 13:35:15.458886   53986 cri.go:89] found id: "84b1a84ce8b21f2457a869ae6de1bd871b554fbf24daf7b0917ecb9f5be730ee"
	I0816 13:35:15.458888   53986 cri.go:89] found id: "02c394608769f4cb881fd1e0ccad5b28dfa13863932a19fd179bea7743cc9dc6"
	I0816 13:35:15.458892   53986 cri.go:89] found id: "3a3bd5a844fa8eab4293d8c034bc6e1e4c7c53d18f1ad34a7d872abc477cebe9"
	I0816 13:35:15.458895   53986 cri.go:89] found id: "7b09f6899adef85ec16c2bab0c545d8e6b72cb753bdc63e40f7dfa46242739f4"
	I0816 13:35:15.458899   53986 cri.go:89] found id: "ae8dfb53e0d4957cbb74cfb3c137bf7f0332b0263b1028502597aed9a43702e4"
	I0816 13:35:15.458903   53986 cri.go:89] found id: "f6c9fba1f618a01d8b5ca59760f1006dd53662c62262ba2fc253cff8ef7d57be"
	I0816 13:35:15.458911   53986 cri.go:89] found id: ""
	I0816 13:35:15.458963   53986 ssh_runner.go:195] Run: sudo runc list -f json
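StartCluster begins by enumerating any existing kube-system containers through crictl with a label filter on the pod namespace; each returned ID is logged above as "found id". A stand-alone sketch of that listing, using the same crictl flags shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }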
	
	
	==> CRI-O <==
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.907821090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815325907798338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95492c7d-2f05-4d44-882c-55adec204f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.908494395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b9cedeb-5c2b-4cbb-a411-2c9b605cfe9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.908565263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b9cedeb-5c2b-4cbb-a411-2c9b605cfe9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.908984552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409d9361b0ae473c5bb4c4d462a5ff0fbe6b56a2130b72055c9d538361a288,PodSandboxId:0684a1dc2577a449add997d6ec1557334938c7391d39793b9f5b3cf7df5ea964,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323404477277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb3b042c4248705e4cfa986d7951647953f3162d39db46195c7b23088d8d5e1,PodSandboxId:b8318befe07469158d1baebdbcd583172ab1ddcae1ed5a7a994cde04df31237d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323334009281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aad8246b133bf2274e3787f75f23d24f56386e7840210a22be5a1c0bc89a638,PodSandboxId:21877cd51f78d8e94e9d103ebe11baa8106297ac0cae86f3c36813939c2f10f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723815322917436112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78068b276e936137b3319db6c1e2adb91f073795813f526e48012480b23387c,PodSandboxId:05ab0c2d2a91f93876bea787f19619d506764c2724af99589c9b4572400592b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
815322910980674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a8097e30ed9d743062ecbe1a5f291e81a90df1a4c0ab244b760b1bd307dade,PodSandboxId:137f743bdc7a3daa9dcbaa055f2eff1fec1b295715db5f3cd639928d75040851,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815318024004133,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21f56900e14a07f58f2845cd3bfb1cdb0459179de108b36f5ea4b1674b8ffd9,PodSandboxId:5fc5d61f66aed616b5fa3ce9bd5ba93349e332f3a1f6050ab9c4b9fe7f19f1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815318017154394,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df407a85310786e69578dc9c81c9ceb21e0a3376439af85f611234fa07265d78,PodSandboxId:23fb8d7acc43954405556580e9d5a359bfa463deb1ebd1b13d4f1bbb76451aaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815317969338241,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69205ebfdf557da8715152da5b671d51d725f8ee471703cf5f0f4867773cdba2,PodSandboxId:bd1bc191b8b1f32e1dbd7407ba7f7343a03b67754508f855bfd1dea6e87720c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815317914921442,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216477333f98684dbfddd6483c9dd9c856bcfd4f3adf60e398a6dbc6f4660a54,PodSandboxId:23e7d37828326c0d0d25c99d83cc31873e751612e1c91eb9055e7399b10c7d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815276367091271,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4360310fa01d248ec85833d065ce5576102ada91df04d6be122b7603727715c,PodSandboxId:e3c26ef2535a0009a6fde110d114cd8cdc1d9a6f1f494d21cc3524bbb7473b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246304065543,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0967fd24cbb441eaa2b5c854faee4427d1abd3f7d6a33906c7e0239c9e155a6,PodSandboxId:463ccc0728b10eeb32d0c4d370d556ff1bf6ea5b36194eb1c56f76da3b84fa2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246268070384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b1a84ce8b21f2457a869ae6de1bd871b554fbf24daf7b0917ecb9f5be730ee,PodSandboxId:16e68f652fb35682df0b60b0787e78d2963d2683504477c27eda666b5ad0
d572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815245801450127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3bd5a844fa8eab4293d8c034bc6e1e4c7c53d18f1ad34a7d872abc477cebe9,PodSandboxId:7cb4179ab861eefb2e7ec22c4dd81294720072af198c2db8565defad4ffb6d38,Metadata:&ContainerMetadata{Na
me:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815235267692846,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09f6899adef85ec16c2bab0c545d8e6b72cb753bdc63e40f7dfa46242739f4,PodSandboxId:295cf3a645495b8c2d713a059037b5969065bda5681fb58a7ace12d728d6b55e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815235258939372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8dfb53e0d4957cbb74cfb3c137bf7f0332b0263b1028502597aed9a43702e4,PodSandboxId:2da98600be816d8369078fd6fa04decc66abad9727e6e8b7a8715c2a6500055d,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815235201941539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c9fba1f618a01d8b5ca59760f1006dd53662c62262ba2fc253cff8ef7d57be,PodSandboxId:4a5b2b3c347454bb9007903ef76767218dd062867a2498417de44092398b3e86,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815235145371675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b9cedeb-5c2b-4cbb-a411-2c9b605cfe9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.960914953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7925d123-c1de-4f08-96ff-d04946d7cf03 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.961042972Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7925d123-c1de-4f08-96ff-d04946d7cf03 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.963857558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80951a22-93c3-4547-9e84-1a0c12b600eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.964451457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815325964408963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80951a22-93c3-4547-9e84-1a0c12b600eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.965212985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81261615-cd4b-494f-899f-27991bdc23a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.965338208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81261615-cd4b-494f-899f-27991bdc23a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:25 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:25.965761221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409d9361b0ae473c5bb4c4d462a5ff0fbe6b56a2130b72055c9d538361a288,PodSandboxId:0684a1dc2577a449add997d6ec1557334938c7391d39793b9f5b3cf7df5ea964,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323404477277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb3b042c4248705e4cfa986d7951647953f3162d39db46195c7b23088d8d5e1,PodSandboxId:b8318befe07469158d1baebdbcd583172ab1ddcae1ed5a7a994cde04df31237d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323334009281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aad8246b133bf2274e3787f75f23d24f56386e7840210a22be5a1c0bc89a638,PodSandboxId:21877cd51f78d8e94e9d103ebe11baa8106297ac0cae86f3c36813939c2f10f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723815322917436112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78068b276e936137b3319db6c1e2adb91f073795813f526e48012480b23387c,PodSandboxId:05ab0c2d2a91f93876bea787f19619d506764c2724af99589c9b4572400592b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
815322910980674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a8097e30ed9d743062ecbe1a5f291e81a90df1a4c0ab244b760b1bd307dade,PodSandboxId:137f743bdc7a3daa9dcbaa055f2eff1fec1b295715db5f3cd639928d75040851,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815318024004133,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21f56900e14a07f58f2845cd3bfb1cdb0459179de108b36f5ea4b1674b8ffd9,PodSandboxId:5fc5d61f66aed616b5fa3ce9bd5ba93349e332f3a1f6050ab9c4b9fe7f19f1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815318017154394,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df407a85310786e69578dc9c81c9ceb21e0a3376439af85f611234fa07265d78,PodSandboxId:23fb8d7acc43954405556580e9d5a359bfa463deb1ebd1b13d4f1bbb76451aaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815317969338241,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69205ebfdf557da8715152da5b671d51d725f8ee471703cf5f0f4867773cdba2,PodSandboxId:bd1bc191b8b1f32e1dbd7407ba7f7343a03b67754508f855bfd1dea6e87720c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815317914921442,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216477333f98684dbfddd6483c9dd9c856bcfd4f3adf60e398a6dbc6f4660a54,PodSandboxId:23e7d37828326c0d0d25c99d83cc31873e751612e1c91eb9055e7399b10c7d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815276367091271,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4360310fa01d248ec85833d065ce5576102ada91df04d6be122b7603727715c,PodSandboxId:e3c26ef2535a0009a6fde110d114cd8cdc1d9a6f1f494d21cc3524bbb7473b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246304065543,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0967fd24cbb441eaa2b5c854faee4427d1abd3f7d6a33906c7e0239c9e155a6,PodSandboxId:463ccc0728b10eeb32d0c4d370d556ff1bf6ea5b36194eb1c56f76da3b84fa2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246268070384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b1a84ce8b21f2457a869ae6de1bd871b554fbf24daf7b0917ecb9f5be730ee,PodSandboxId:16e68f652fb35682df0b60b0787e78d2963d2683504477c27eda666b5ad0
d572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815245801450127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3bd5a844fa8eab4293d8c034bc6e1e4c7c53d18f1ad34a7d872abc477cebe9,PodSandboxId:7cb4179ab861eefb2e7ec22c4dd81294720072af198c2db8565defad4ffb6d38,Metadata:&ContainerMetadata{Na
me:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815235267692846,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09f6899adef85ec16c2bab0c545d8e6b72cb753bdc63e40f7dfa46242739f4,PodSandboxId:295cf3a645495b8c2d713a059037b5969065bda5681fb58a7ace12d728d6b55e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815235258939372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8dfb53e0d4957cbb74cfb3c137bf7f0332b0263b1028502597aed9a43702e4,PodSandboxId:2da98600be816d8369078fd6fa04decc66abad9727e6e8b7a8715c2a6500055d,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815235201941539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c9fba1f618a01d8b5ca59760f1006dd53662c62262ba2fc253cff8ef7d57be,PodSandboxId:4a5b2b3c347454bb9007903ef76767218dd062867a2498417de44092398b3e86,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815235145371675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81261615-cd4b-494f-899f-27991bdc23a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.014533095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eacd4c07-9d77-440f-ba27-d43f86f16288 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.014649983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eacd4c07-9d77-440f-ba27-d43f86f16288 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.015935469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dee8c0c-32a8-4be5-89a5-cc816184a57d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.016657719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815326016617174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dee8c0c-32a8-4be5-89a5-cc816184a57d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.017374544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2caf70a5-c834-443b-b357-15afa8e60294 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.017475627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2caf70a5-c834-443b-b357-15afa8e60294 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.017990301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409d9361b0ae473c5bb4c4d462a5ff0fbe6b56a2130b72055c9d538361a288,PodSandboxId:0684a1dc2577a449add997d6ec1557334938c7391d39793b9f5b3cf7df5ea964,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323404477277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb3b042c4248705e4cfa986d7951647953f3162d39db46195c7b23088d8d5e1,PodSandboxId:b8318befe07469158d1baebdbcd583172ab1ddcae1ed5a7a994cde04df31237d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323334009281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aad8246b133bf2274e3787f75f23d24f56386e7840210a22be5a1c0bc89a638,PodSandboxId:21877cd51f78d8e94e9d103ebe11baa8106297ac0cae86f3c36813939c2f10f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723815322917436112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78068b276e936137b3319db6c1e2adb91f073795813f526e48012480b23387c,PodSandboxId:05ab0c2d2a91f93876bea787f19619d506764c2724af99589c9b4572400592b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
815322910980674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a8097e30ed9d743062ecbe1a5f291e81a90df1a4c0ab244b760b1bd307dade,PodSandboxId:137f743bdc7a3daa9dcbaa055f2eff1fec1b295715db5f3cd639928d75040851,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815318024004133,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21f56900e14a07f58f2845cd3bfb1cdb0459179de108b36f5ea4b1674b8ffd9,PodSandboxId:5fc5d61f66aed616b5fa3ce9bd5ba93349e332f3a1f6050ab9c4b9fe7f19f1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815318017154394,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df407a85310786e69578dc9c81c9ceb21e0a3376439af85f611234fa07265d78,PodSandboxId:23fb8d7acc43954405556580e9d5a359bfa463deb1ebd1b13d4f1bbb76451aaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815317969338241,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69205ebfdf557da8715152da5b671d51d725f8ee471703cf5f0f4867773cdba2,PodSandboxId:bd1bc191b8b1f32e1dbd7407ba7f7343a03b67754508f855bfd1dea6e87720c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815317914921442,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216477333f98684dbfddd6483c9dd9c856bcfd4f3adf60e398a6dbc6f4660a54,PodSandboxId:23e7d37828326c0d0d25c99d83cc31873e751612e1c91eb9055e7399b10c7d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815276367091271,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4360310fa01d248ec85833d065ce5576102ada91df04d6be122b7603727715c,PodSandboxId:e3c26ef2535a0009a6fde110d114cd8cdc1d9a6f1f494d21cc3524bbb7473b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246304065543,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0967fd24cbb441eaa2b5c854faee4427d1abd3f7d6a33906c7e0239c9e155a6,PodSandboxId:463ccc0728b10eeb32d0c4d370d556ff1bf6ea5b36194eb1c56f76da3b84fa2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246268070384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b1a84ce8b21f2457a869ae6de1bd871b554fbf24daf7b0917ecb9f5be730ee,PodSandboxId:16e68f652fb35682df0b60b0787e78d2963d2683504477c27eda666b5ad0
d572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815245801450127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3bd5a844fa8eab4293d8c034bc6e1e4c7c53d18f1ad34a7d872abc477cebe9,PodSandboxId:7cb4179ab861eefb2e7ec22c4dd81294720072af198c2db8565defad4ffb6d38,Metadata:&ContainerMetadata{Na
me:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815235267692846,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09f6899adef85ec16c2bab0c545d8e6b72cb753bdc63e40f7dfa46242739f4,PodSandboxId:295cf3a645495b8c2d713a059037b5969065bda5681fb58a7ace12d728d6b55e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815235258939372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8dfb53e0d4957cbb74cfb3c137bf7f0332b0263b1028502597aed9a43702e4,PodSandboxId:2da98600be816d8369078fd6fa04decc66abad9727e6e8b7a8715c2a6500055d,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815235201941539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c9fba1f618a01d8b5ca59760f1006dd53662c62262ba2fc253cff8ef7d57be,PodSandboxId:4a5b2b3c347454bb9007903ef76767218dd062867a2498417de44092398b3e86,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815235145371675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2caf70a5-c834-443b-b357-15afa8e60294 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.057850417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81000be8-98ab-4146-aaff-fac6440b01d1 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.057944058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81000be8-98ab-4146-aaff-fac6440b01d1 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.059407440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d75cf9c-58e0-4efe-8561-141255009721 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.059816416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815326059790423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d75cf9c-58e0-4efe-8561-141255009721 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.060407179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28eecebe-820f-405c-8997-77544bc6cd48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.060479700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28eecebe-820f-405c-8997-77544bc6cd48 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:35:26 kubernetes-upgrade-759623 crio[2319]: time="2024-08-16 13:35:26.060853541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409d9361b0ae473c5bb4c4d462a5ff0fbe6b56a2130b72055c9d538361a288,PodSandboxId:0684a1dc2577a449add997d6ec1557334938c7391d39793b9f5b3cf7df5ea964,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323404477277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb3b042c4248705e4cfa986d7951647953f3162d39db46195c7b23088d8d5e1,PodSandboxId:b8318befe07469158d1baebdbcd583172ab1ddcae1ed5a7a994cde04df31237d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815323334009281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aad8246b133bf2274e3787f75f23d24f56386e7840210a22be5a1c0bc89a638,PodSandboxId:21877cd51f78d8e94e9d103ebe11baa8106297ac0cae86f3c36813939c2f10f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723815322917436112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78068b276e936137b3319db6c1e2adb91f073795813f526e48012480b23387c,PodSandboxId:05ab0c2d2a91f93876bea787f19619d506764c2724af99589c9b4572400592b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
815322910980674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a8097e30ed9d743062ecbe1a5f291e81a90df1a4c0ab244b760b1bd307dade,PodSandboxId:137f743bdc7a3daa9dcbaa055f2eff1fec1b295715db5f3cd639928d75040851,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815318024004133,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21f56900e14a07f58f2845cd3bfb1cdb0459179de108b36f5ea4b1674b8ffd9,PodSandboxId:5fc5d61f66aed616b5fa3ce9bd5ba93349e332f3a1f6050ab9c4b9fe7f19f1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815318017154394,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df407a85310786e69578dc9c81c9ceb21e0a3376439af85f611234fa07265d78,PodSandboxId:23fb8d7acc43954405556580e9d5a359bfa463deb1ebd1b13d4f1bbb76451aaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815317969338241,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69205ebfdf557da8715152da5b671d51d725f8ee471703cf5f0f4867773cdba2,PodSandboxId:bd1bc191b8b1f32e1dbd7407ba7f7343a03b67754508f855bfd1dea6e87720c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815317914921442,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216477333f98684dbfddd6483c9dd9c856bcfd4f3adf60e398a6dbc6f4660a54,PodSandboxId:23e7d37828326c0d0d25c99d83cc31873e751612e1c91eb9055e7399b10c7d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815276367091271,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c00592-fb99-42cc-b13c-2259512834e7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4360310fa01d248ec85833d065ce5576102ada91df04d6be122b7603727715c,PodSandboxId:e3c26ef2535a0009a6fde110d114cd8cdc1d9a6f1f494d21cc3524bbb7473b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246304065543,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-25fps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450bc7f-3e21-405e-9c71-ba96f0b981db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0967fd24cbb441eaa2b5c854faee4427d1abd3f7d6a33906c7e0239c9e155a6,PodSandboxId:463ccc0728b10eeb32d0c4d370d556ff1bf6ea5b36194eb1c56f76da3b84fa2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815246268070384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mv64l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0a4cad-6f1f-4325-b061-758fa7b86f56,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b1a84ce8b21f2457a869ae6de1bd871b554fbf24daf7b0917ecb9f5be730ee,PodSandboxId:16e68f652fb35682df0b60b0787e78d2963d2683504477c27eda666b5ad0
d572,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815245801450127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fs5sl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26fe66c-eab7-42f5-b40f-68e2b1604f8f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3bd5a844fa8eab4293d8c034bc6e1e4c7c53d18f1ad34a7d872abc477cebe9,PodSandboxId:7cb4179ab861eefb2e7ec22c4dd81294720072af198c2db8565defad4ffb6d38,Metadata:&ContainerMetadata{Na
me:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815235267692846,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f077a0aa375f6a3cdae2b257370274af,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b09f6899adef85ec16c2bab0c545d8e6b72cb753bdc63e40f7dfa46242739f4,PodSandboxId:295cf3a645495b8c2d713a059037b5969065bda5681fb58a7ace12d728d6b55e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815235258939372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c044b16cd531203ceaabf7a870c9a8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8dfb53e0d4957cbb74cfb3c137bf7f0332b0263b1028502597aed9a43702e4,PodSandboxId:2da98600be816d8369078fd6fa04decc66abad9727e6e8b7a8715c2a6500055d,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815235201941539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ebc536e1bfafc769cba0efec870698,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c9fba1f618a01d8b5ca59760f1006dd53662c62262ba2fc253cff8ef7d57be,PodSandboxId:4a5b2b3c347454bb9007903ef76767218dd062867a2498417de44092398b3e86,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815235145371675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-759623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c0640198e884779fa0045199a20abea,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28eecebe-820f-405c-8997-77544bc6cd48 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2c409d9361b0a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   0684a1dc2577a       coredns-6f6b679f8f-mv64l
	3eb3b042c4248       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   b8318befe0746       coredns-6f6b679f8f-25fps
	9aad8246b133b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   3 seconds ago        Running             kube-proxy                1                   21877cd51f78d       kube-proxy-fs5sl
	a78068b276e93       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   05ab0c2d2a91f       storage-provisioner
	b0a8097e30ed9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago        Running             etcd                      1                   137f743bdc7a3       etcd-kubernetes-upgrade-759623
	c21f56900e14a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   8 seconds ago        Running             kube-controller-manager   1                   5fc5d61f66aed       kube-controller-manager-kubernetes-upgrade-759623
	df407a8531078       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   8 seconds ago        Running             kube-apiserver            1                   23fb8d7acc439       kube-apiserver-kubernetes-upgrade-759623
	69205ebfdf557       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   8 seconds ago        Running             kube-scheduler            1                   bd1bc191b8b1f       kube-scheduler-kubernetes-upgrade-759623
	216477333f986       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   49 seconds ago       Exited              storage-provisioner       1                   23e7d37828326       storage-provisioner
	a4360310fa01d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   e3c26ef2535a0       coredns-6f6b679f8f-25fps
	a0967fd24cbb4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   463ccc0728b10       coredns-6f6b679f8f-mv64l
	84b1a84ce8b21       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   16e68f652fb35       kube-proxy-fs5sl
	3a3bd5a844fa8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   7cb4179ab861e       etcd-kubernetes-upgrade-759623
	7b09f6899adef       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   0                   295cf3a645495       kube-controller-manager-kubernetes-upgrade-759623
	ae8dfb53e0d49       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Exited              kube-scheduler            0                   2da98600be816       kube-scheduler-kubernetes-upgrade-759623
	f6c9fba1f618a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            0                   4a5b2b3c34745       kube-apiserver-kubernetes-upgrade-759623
	
	
	==> coredns [2c409d9361b0ae473c5bb4c4d462a5ff0fbe6b56a2130b72055c9d538361a288] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [3eb3b042c4248705e4cfa986d7951647953f3162d39db46195c7b23088d8d5e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a0967fd24cbb441eaa2b5c854faee4427d1abd3f7d6a33906c7e0239c9e155a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4360310fa01d248ec85833d065ce5576102ada91df04d6be122b7603727715c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-759623
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-759623
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:33:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-759623
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:35:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:35:21 +0000   Fri, 16 Aug 2024 13:33:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:35:21 +0000   Fri, 16 Aug 2024 13:33:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:35:21 +0000   Fri, 16 Aug 2024 13:33:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:35:21 +0000   Fri, 16 Aug 2024 13:34:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    kubernetes-upgrade-759623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 676cd8bd07bb45a5be55aa07415b877f
	  System UUID:                676cd8bd-07bb-45a5-be55-aa07415b877f
	  Boot ID:                    c9910ca4-f064-405b-a193-b4b8724f4523
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-25fps                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     81s
	  kube-system                 coredns-6f6b679f8f-mv64l                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     81s
	  kube-system                 etcd-kubernetes-upgrade-759623                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         85s
	  kube-system                 kube-apiserver-kubernetes-upgrade-759623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-759623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-fs5sl                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-kubernetes-upgrade-759623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    92s (x8 over 93s)  kubelet          Node kubernetes-upgrade-759623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s (x7 over 93s)  kubelet          Node kubernetes-upgrade-759623 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s (x8 over 93s)  kubelet          Node kubernetes-upgrade-759623 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           82s                node-controller  Node kubernetes-upgrade-759623 event: Registered Node kubernetes-upgrade-759623 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-759623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-759623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-759623 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-759623 event: Registered Node kubernetes-upgrade-759623 in Controller
	
	
	==> dmesg <==
	[  +2.371562] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.271102] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.055882] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055490] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.192098] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.127626] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.279682] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +4.251512] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +1.851505] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.064103] kauditd_printk_skb: 158 callbacks suppressed
	[Aug16 13:34] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.084564] kauditd_printk_skb: 69 callbacks suppressed
	[ +32.311363] kauditd_printk_skb: 109 callbacks suppressed
	[Aug16 13:35] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.165765] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.210162] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.166482] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.327406] systemd-fstab-generator[2304]: Ignoring "noauto" option for root device
	[  +7.669434] systemd-fstab-generator[2457]: Ignoring "noauto" option for root device
	[  +0.079111] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.042415] systemd-fstab-generator[2579]: Ignoring "noauto" option for root device
	[  +5.601400] kauditd_printk_skb: 75 callbacks suppressed
	[  +1.464851] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	
	
	==> etcd [3a3bd5a844fa8eab4293d8c034bc6e1e4c7c53d18f1ad34a7d872abc477cebe9] <==
	{"level":"info","ts":"2024-08-16T13:33:56.252387Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:kubernetes-upgrade-759623 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:33:56.252436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:33:56.252866Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:33:56.253645Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:33:56.257382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:33:56.252881Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:33:56.257536Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:33:56.258038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:33:56.258790Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-08-16T13:33:56.269787Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:33:56.273357Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:33:56.275228Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:34:18.223877Z","caller":"traceutil/trace.go:171","msg":"trace[1068405873] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"128.52076ms","start":"2024-08-16T13:34:18.095049Z","end":"2024-08-16T13:34:18.223569Z","steps":["trace[1068405873] 'process raft request'  (duration: 128.300146ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:34:18.693924Z","caller":"traceutil/trace.go:171","msg":"trace[2050261434] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"132.217201ms","start":"2024-08-16T13:34:18.561691Z","end":"2024-08-16T13:34:18.693908Z","steps":["trace[2050261434] 'process raft request'  (duration: 132.048188ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:34:40.730364Z","caller":"traceutil/trace.go:171","msg":"trace[1946322602] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"173.769505ms","start":"2024-08-16T13:34:40.556582Z","end":"2024-08-16T13:34:40.730351Z","steps":["trace[1946322602] 'process raft request'  (duration: 173.651935ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:34:59.564899Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T13:34:59.565041Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-759623","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	{"level":"warn","ts":"2024-08-16T13:34:59.565107Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:34:59.565137Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:34:59.565254Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T13:34:59.565320Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T13:34:59.623033Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"79ee2fa200dbf73d"}
	{"level":"info","ts":"2024-08-16T13:34:59.625981Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-16T13:34:59.626055Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-16T13:34:59.626063Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-759623","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> etcd [b0a8097e30ed9d743062ecbe1a5f291e81a90df1a4c0ab244b760b1bd307dade] <==
	{"level":"info","ts":"2024-08-16T13:35:18.398078Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","added-peer-id":"79ee2fa200dbf73d","added-peer-peer-urls":["https://192.168.39.57:2380"]}
	{"level":"info","ts":"2024-08-16T13:35:18.398280Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:35:18.398319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:35:18.404813Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:35:18.410779Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-16T13:35:18.410813Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-16T13:35:18.410724Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T13:35:18.415043Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"79ee2fa200dbf73d","initial-advertise-peer-urls":["https://192.168.39.57:2380"],"listen-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.57:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T13:35:18.415086Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:35:20.247068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T13:35:20.247107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:35:20.247148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgPreVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-08-16T13:35:20.247205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:35:20.247214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-08-16T13:35:20.247222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 3"}
	{"level":"info","ts":"2024-08-16T13:35:20.247229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-08-16T13:35:20.253528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:35:20.253462Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:kubernetes-upgrade-759623 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:35:20.254372Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:35:20.254682Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:35:20.255479Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:35:20.255613Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-08-16T13:35:20.256485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:35:20.256532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:35:20.256566Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:35:26 up 1 min,  0 users,  load average: 1.61, 0.44, 0.15
	Linux kubernetes-upgrade-759623 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [df407a85310786e69578dc9c81c9ceb21e0a3376439af85f611234fa07265d78] <==
	I0816 13:35:21.597061       1 policy_source.go:224] refreshing policies
	I0816 13:35:21.614267       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:35:21.641984       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 13:35:21.642098       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 13:35:21.642524       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 13:35:21.644557       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 13:35:21.644700       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 13:35:21.644869       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 13:35:21.655062       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 13:35:21.657494       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 13:35:21.671510       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 13:35:21.671774       1 aggregator.go:171] initial CRD sync complete...
	I0816 13:35:21.671817       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 13:35:21.671840       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 13:35:21.671863       1 cache.go:39] Caches are synced for autoregister controller
	I0816 13:35:21.675142       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0816 13:35:21.679312       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 13:35:22.429669       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 13:35:23.767294       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:35:23.797288       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:35:23.855280       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:35:23.898662       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:35:23.909391       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 13:35:24.975021       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:35:25.012125       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [f6c9fba1f618a01d8b5ca59760f1006dd53662c62262ba2fc253cff8ef7d57be] <==
	I0816 13:33:58.086233       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:33:58.737956       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 13:33:58.745591       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 13:33:58.745951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 13:33:59.373052       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:33:59.415405       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 13:33:59.552775       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0816 13:33:59.559438       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.57]
	I0816 13:33:59.560371       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:33:59.564570       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:33:59.806546       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:34:03.899315       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:34:03.923586       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 13:34:03.935974       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:34:04.703227       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 13:34:04.954323       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0816 13:34:59.557857       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0816 13:34:59.571362       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0816 13:34:59.571468       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0816 13:34:59.571521       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0816 13:34:59.571555       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0816 13:34:59.571625       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc0036f5708)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	E0816 13:34:59.571624       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0816 13:34:59.571712       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0816 13:34:59.576301       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type" logger="UnhandledError"
	
	
	==> kube-controller-manager [7b09f6899adef85ec16c2bab0c545d8e6b72cb753bdc63e40f7dfa46242739f4] <==
	I0816 13:34:04.555973       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0816 13:34:04.561477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-759623"
	I0816 13:34:04.563666       1 shared_informer.go:320] Caches are synced for GC
	I0816 13:34:04.597772       1 shared_informer.go:320] Caches are synced for HPA
	I0816 13:34:04.689804       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0816 13:34:04.689888       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0816 13:34:04.691675       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0816 13:34:04.691722       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0816 13:34:04.734791       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0816 13:34:04.753706       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:34:04.760640       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:34:05.057253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-759623"
	I0816 13:34:05.215368       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:34:05.226656       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:34:05.226704       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 13:34:05.553440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="593.070628ms"
	I0816 13:34:05.570876       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="17.24225ms"
	I0816 13:34:05.571629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="53.299µs"
	I0816 13:34:05.599018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="95.777µs"
	I0816 13:34:07.259445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="75.218µs"
	I0816 13:34:07.292095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.269662ms"
	I0816 13:34:07.292338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="78.343µs"
	I0816 13:34:07.320521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="16.382879ms"
	I0816 13:34:07.320607       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="43.55µs"
	I0816 13:34:08.400575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-759623"
	
	
	==> kube-controller-manager [c21f56900e14a07f58f2845cd3bfb1cdb0459179de108b36f5ea4b1674b8ffd9] <==
	I0816 13:35:24.954889       1 shared_informer.go:320] Caches are synced for cronjob
	I0816 13:35:24.965261       1 shared_informer.go:320] Caches are synced for ephemeral
	I0816 13:35:24.967228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0816 13:35:24.970802       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0816 13:35:24.976375       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0816 13:35:24.976703       1 shared_informer.go:320] Caches are synced for persistent volume
	I0816 13:35:24.984801       1 shared_informer.go:320] Caches are synced for job
	I0816 13:35:24.984926       1 shared_informer.go:320] Caches are synced for HPA
	I0816 13:35:25.004420       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0816 13:35:25.004474       1 shared_informer.go:320] Caches are synced for PVC protection
	I0816 13:35:25.005919       1 shared_informer.go:320] Caches are synced for deployment
	I0816 13:35:25.063340       1 shared_informer.go:320] Caches are synced for taint
	I0816 13:35:25.063563       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0816 13:35:25.063674       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-759623"
	I0816 13:35:25.063815       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0816 13:35:25.090230       1 shared_informer.go:320] Caches are synced for disruption
	I0816 13:35:25.093125       1 shared_informer.go:320] Caches are synced for daemon sets
	I0816 13:35:25.113246       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:35:25.142322       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:35:25.159380       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0816 13:35:25.210672       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="243.328839ms"
	I0816 13:35:25.210810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="71.13µs"
	I0816 13:35:25.597927       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:35:25.597983       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 13:35:25.606513       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [84b1a84ce8b21f2457a869ae6de1bd871b554fbf24daf7b0917ecb9f5be730ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:34:06.118065       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:34:06.137405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	E0816 13:34:06.137493       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:34:06.278750       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:34:06.278796       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:34:06.278826       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:34:06.282398       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:34:06.282624       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:34:06.282635       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:34:06.284659       1 config.go:197] "Starting service config controller"
	I0816 13:34:06.284695       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:34:06.284717       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:34:06.284721       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:34:06.285369       1 config.go:326] "Starting node config controller"
	I0816 13:34:06.285394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:34:06.386393       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:34:06.386460       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:34:06.386521       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9aad8246b133bf2274e3787f75f23d24f56386e7840210a22be5a1c0bc89a638] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:35:23.320128       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:35:23.330351       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	E0816 13:35:23.330429       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:35:23.428974       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:35:23.429020       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:35:23.429048       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:35:23.445350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:35:23.445549       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:35:23.445560       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:35:23.450284       1 config.go:197] "Starting service config controller"
	I0816 13:35:23.450308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:35:23.450389       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:35:23.450395       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:35:23.450896       1 config.go:326] "Starting node config controller"
	I0816 13:35:23.450903       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:35:23.555199       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:35:23.555232       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:35:23.555256       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [69205ebfdf557da8715152da5b671d51d725f8ee471703cf5f0f4867773cdba2] <==
	I0816 13:35:18.624432       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:35:21.516860       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:35:21.516925       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:35:21.516941       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:35:21.516953       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:35:21.613657       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:35:21.619242       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:35:21.627488       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:35:21.627552       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:35:21.631524       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:35:21.631637       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:35:21.728304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ae8dfb53e0d4957cbb74cfb3c137bf7f0332b0263b1028502597aed9a43702e4] <==
	E0816 13:33:57.845273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:57.845349       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 13:33:57.845378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:57.845419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:33:57.845428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:57.845464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 13:33:57.845492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:57.845537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:33:57.845564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:58.687516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:33:58.687582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:58.707066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 13:33:58.707122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:58.869930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 13:33:58.869964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:58.899299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:33:58.899435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:58.941782       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 13:33:58.941980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 13:33:59.060096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 13:33:59.060305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 13:33:59.087457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:33:59.087544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 13:34:01.629119       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 13:34:59.560994       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:17.424551    2586 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56c044b16cd531203ceaabf7a870c9a8-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-759623\" (UID: \"56c044b16cd531203ceaabf7a870c9a8\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-759623"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:17.424572    2586 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0ebc536e1bfafc769cba0efec870698-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-759623\" (UID: \"d0ebc536e1bfafc769cba0efec870698\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-759623"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:17.424590    2586 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f077a0aa375f6a3cdae2b257370274af-etcd-certs\") pod \"etcd-kubernetes-upgrade-759623\" (UID: \"f077a0aa375f6a3cdae2b257370274af\") " pod="kube-system/etcd-kubernetes-upgrade-759623"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:17.576757    2586 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-759623"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: E0816 13:35:17.578065    2586 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.57:8443: connect: connection refused" node="kubernetes-upgrade-759623"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: E0816 13:35:17.824469    2586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-759623?timeout=10s\": dial tcp 192.168.39.57:8443: connect: connection refused" interval="800ms"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:17.986044    2586 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-759623"
	Aug 16 13:35:17 kubernetes-upgrade-759623 kubelet[2586]: E0816 13:35:17.986901    2586 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.57:8443: connect: connection refused" node="kubernetes-upgrade-759623"
	Aug 16 13:35:18 kubernetes-upgrade-759623 kubelet[2586]: W0816 13:35:18.278953    2586 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	Aug 16 13:35:18 kubernetes-upgrade-759623 kubelet[2586]: E0816 13:35:18.279095    2586 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.57:8443: connect: connection refused" logger="UnhandledError"
	Aug 16 13:35:18 kubernetes-upgrade-759623 kubelet[2586]: W0816 13:35:18.315218    2586 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	Aug 16 13:35:18 kubernetes-upgrade-759623 kubelet[2586]: E0816 13:35:18.315337    2586 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.57:8443: connect: connection refused" logger="UnhandledError"
	Aug 16 13:35:18 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:18.788944    2586 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-759623"
	Aug 16 13:35:21 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:21.653856    2586 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-759623"
	Aug 16 13:35:21 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:21.654512    2586 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-759623"
	Aug 16 13:35:21 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:21.654640    2586 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 13:35:21 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:21.655931    2586 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 13:35:21 kubernetes-upgrade-759623 kubelet[2586]: E0816 13:35:21.682691    2586 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-759623\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-759623"
	Aug 16 13:35:22 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:22.202507    2586 apiserver.go:52] "Watching apiserver"
	Aug 16 13:35:22 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:22.216812    2586 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 16 13:35:22 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:22.278273    2586 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c26fe66c-eab7-42f5-b40f-68e2b1604f8f-xtables-lock\") pod \"kube-proxy-fs5sl\" (UID: \"c26fe66c-eab7-42f5-b40f-68e2b1604f8f\") " pod="kube-system/kube-proxy-fs5sl"
	Aug 16 13:35:22 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:22.278330    2586 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c26fe66c-eab7-42f5-b40f-68e2b1604f8f-lib-modules\") pod \"kube-proxy-fs5sl\" (UID: \"c26fe66c-eab7-42f5-b40f-68e2b1604f8f\") " pod="kube-system/kube-proxy-fs5sl"
	Aug 16 13:35:22 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:22.278383    2586 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e8c00592-fb99-42cc-b13c-2259512834e7-tmp\") pod \"storage-provisioner\" (UID: \"e8c00592-fb99-42cc-b13c-2259512834e7\") " pod="kube-system/storage-provisioner"
	Aug 16 13:35:25 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:25.470556    2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 13:35:25 kubernetes-upgrade-759623 kubelet[2586]: I0816 13:35:25.471033    2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [216477333f98684dbfddd6483c9dd9c856bcfd4f3adf60e398a6dbc6f4660a54] <==
	I0816 13:34:36.509383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:34:36.518949       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:34:36.519016       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:34:36.528716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:34:36.528923       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-759623_908d2e89-5664-4c46-9e28-21ad75e8b8b7!
	I0816 13:34:36.533574       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d011cf05-c234-4701-99de-7804692f2054", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-759623_908d2e89-5664-4c46-9e28-21ad75e8b8b7 became leader
	I0816 13:34:36.629778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-759623_908d2e89-5664-4c46-9e28-21ad75e8b8b7!
	
	
	==> storage-provisioner [a78068b276e936137b3319db6c1e2adb91f073795813f526e48012480b23387c] <==
	I0816 13:35:23.082491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:35:23.104930       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:35:23.104997       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:35:25.537291   55132 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-3966/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
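A note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner aborts when a single line exceeds its internal buffer (64 KiB by default), which is the likely reason reading lastStart.txt fails here. The sketch below is illustrative only, not minikube's actual logs.go code, and the file path is hypothetical; it shows how raising the buffer cap via Scanner.Buffer lets a file with very long lines be scanned.

// Illustrative sketch (assumed file path; not minikube's implementation).
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the enlarged buffer, this is where "bufio.Scanner: token too long" surfaces.
		fmt.Fprintln(os.Stderr, err)
	}
}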
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-759623 -n kubernetes-upgrade-759623
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-759623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-759623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-759623: (1.130343452s)
--- FAIL: TestKubernetesUpgrade (442.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (69.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-356375 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-356375 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.685283842s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-356375] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-356375" primary control-plane node in "pause-356375" cluster
	* Updating the running kvm2 "pause-356375" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-356375" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:30:26.588400   48217 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:30:26.588566   48217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:30:26.588577   48217 out.go:358] Setting ErrFile to fd 2...
	I0816 13:30:26.588581   48217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:30:26.588841   48217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:30:26.590026   48217 out.go:352] Setting JSON to false
	I0816 13:30:26.591035   48217 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4372,"bootTime":1723810655,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:30:26.591098   48217 start.go:139] virtualization: kvm guest
	I0816 13:30:26.592993   48217 out.go:177] * [pause-356375] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:30:26.594620   48217 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:30:26.594682   48217 notify.go:220] Checking for updates...
	I0816 13:30:26.597524   48217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:30:26.599277   48217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:30:26.600993   48217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:30:26.602434   48217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:30:26.603930   48217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:30:26.605797   48217 config.go:182] Loaded profile config "pause-356375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:30:26.606284   48217 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:30:26.606356   48217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:30:26.623181   48217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0816 13:30:26.623705   48217 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:30:26.624415   48217 main.go:141] libmachine: Using API Version  1
	I0816 13:30:26.624440   48217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:30:26.624803   48217 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:30:26.625051   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:26.625354   48217 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:30:26.625656   48217 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:30:26.625687   48217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:30:26.641356   48217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0816 13:30:26.641837   48217 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:30:26.642787   48217 main.go:141] libmachine: Using API Version  1
	I0816 13:30:26.642814   48217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:30:26.643279   48217 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:30:26.643502   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:26.684227   48217 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:30:26.685549   48217 start.go:297] selected driver: kvm2
	I0816 13:30:26.685585   48217 start.go:901] validating driver "kvm2" against &{Name:pause-356375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-356375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:30:26.685738   48217 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:30:26.686264   48217 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:30:26.686353   48217 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:30:26.704104   48217 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:30:26.705063   48217 cni.go:84] Creating CNI manager for ""
	I0816 13:30:26.705078   48217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:30:26.705132   48217 start.go:340] cluster config:
	{Name:pause-356375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-356375 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:30:26.705262   48217 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:30:26.707102   48217 out.go:177] * Starting "pause-356375" primary control-plane node in "pause-356375" cluster
	I0816 13:30:26.708480   48217 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:30:26.708524   48217 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:30:26.708543   48217 cache.go:56] Caching tarball of preloaded images
	I0816 13:30:26.708645   48217 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:30:26.708657   48217 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:30:26.708787   48217 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/config.json ...
	I0816 13:30:26.709118   48217 start.go:360] acquireMachinesLock for pause-356375: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:30:26.709199   48217 start.go:364] duration metric: took 41.053µs to acquireMachinesLock for "pause-356375"
	I0816 13:30:26.709227   48217 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:30:26.709244   48217 fix.go:54] fixHost starting: 
	I0816 13:30:26.709647   48217 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:30:26.709690   48217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:30:26.729860   48217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0816 13:30:26.730325   48217 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:30:26.730878   48217 main.go:141] libmachine: Using API Version  1
	I0816 13:30:26.730901   48217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:30:26.731414   48217 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:30:26.731629   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:26.731819   48217 main.go:141] libmachine: (pause-356375) Calling .GetState
	I0816 13:30:26.733626   48217 fix.go:112] recreateIfNeeded on pause-356375: state=Running err=<nil>
	W0816 13:30:26.733657   48217 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:30:26.735455   48217 out.go:177] * Updating the running kvm2 "pause-356375" VM ...
	I0816 13:30:26.736643   48217 machine.go:93] provisionDockerMachine start ...
	I0816 13:30:26.736668   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:26.736934   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:26.739564   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:26.740007   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:26.740036   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:26.740211   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:26.740382   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:26.740561   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:26.740739   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:26.740958   48217 main.go:141] libmachine: Using SSH client type: native
	I0816 13:30:26.741169   48217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0816 13:30:26.741185   48217 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:30:26.870198   48217 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-356375
	
	I0816 13:30:26.870231   48217 main.go:141] libmachine: (pause-356375) Calling .GetMachineName
	I0816 13:30:26.870504   48217 buildroot.go:166] provisioning hostname "pause-356375"
	I0816 13:30:26.870577   48217 main.go:141] libmachine: (pause-356375) Calling .GetMachineName
	I0816 13:30:26.870891   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:26.873522   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:26.873893   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:26.873917   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:26.874107   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:26.874305   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:26.874498   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:26.874640   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:26.874827   48217 main.go:141] libmachine: Using SSH client type: native
	I0816 13:30:26.875051   48217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0816 13:30:26.875068   48217 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-356375 && echo "pause-356375" | sudo tee /etc/hostname
	I0816 13:30:27.010693   48217 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-356375
	
	I0816 13:30:27.010717   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:27.014516   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.015026   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:27.015060   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.015307   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:27.015507   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:27.015656   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:27.015770   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:27.015958   48217 main.go:141] libmachine: Using SSH client type: native
	I0816 13:30:27.016182   48217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0816 13:30:27.016198   48217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-356375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-356375/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-356375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:30:27.134942   48217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:30:27.134997   48217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:30:27.135054   48217 buildroot.go:174] setting up certificates
	I0816 13:30:27.135068   48217 provision.go:84] configureAuth start
	I0816 13:30:27.135082   48217 main.go:141] libmachine: (pause-356375) Calling .GetMachineName
	I0816 13:30:27.135441   48217 main.go:141] libmachine: (pause-356375) Calling .GetIP
	I0816 13:30:27.138440   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.138886   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:27.138922   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.139176   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:27.142007   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.142474   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:27.142507   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.142738   48217 provision.go:143] copyHostCerts
	I0816 13:30:27.142799   48217 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:30:27.142818   48217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:30:27.142905   48217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:30:27.143068   48217 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:30:27.143081   48217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:30:27.143114   48217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:30:27.143211   48217 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:30:27.143221   48217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:30:27.143253   48217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:30:27.143331   48217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.pause-356375 san=[127.0.0.1 192.168.61.95 localhost minikube pause-356375]
	I0816 13:30:27.388098   48217 provision.go:177] copyRemoteCerts
	I0816 13:30:27.388173   48217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:30:27.388202   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:27.390865   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.391290   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:27.391315   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.391540   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:27.391785   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:27.392003   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:27.392178   48217 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/pause-356375/id_rsa Username:docker}
	I0816 13:30:27.490949   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:30:27.520612   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:30:27.552704   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0816 13:30:27.584253   48217 provision.go:87] duration metric: took 449.171524ms to configureAuth
	I0816 13:30:27.584288   48217 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:30:27.584523   48217 config.go:182] Loaded profile config "pause-356375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:30:27.584597   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:27.587735   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.588177   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:27.588203   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:27.588476   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:27.588662   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:27.588781   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:27.588950   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:27.589175   48217 main.go:141] libmachine: Using SSH client type: native
	I0816 13:30:27.589404   48217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0816 13:30:27.589430   48217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:30:33.213803   48217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:30:33.213826   48217 machine.go:96] duration metric: took 6.477169701s to provisionDockerMachine
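provisionDockerMachine above finishes by pushing the container-runtime options over SSH and restarting CRI-O. Restated as a standalone sketch (file path and option string taken verbatim from the logged command):

    # Write minikube's CRI-O options (insecure in-cluster registry range) and restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
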
	I0816 13:30:33.213841   48217 start.go:293] postStartSetup for "pause-356375" (driver="kvm2")
	I0816 13:30:33.213853   48217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:30:33.213879   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:33.214292   48217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:30:33.214323   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:33.217496   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.217852   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:33.217883   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.218083   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:33.218301   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:33.218458   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:33.218607   48217 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/pause-356375/id_rsa Username:docker}
	I0816 13:30:33.314105   48217 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:30:33.319043   48217 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:30:33.319072   48217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:30:33.319139   48217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:30:33.319246   48217 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:30:33.319366   48217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:30:33.329965   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:30:33.361462   48217 start.go:296] duration metric: took 147.606283ms for postStartSetup
	I0816 13:30:33.361509   48217 fix.go:56] duration metric: took 6.652271453s for fixHost
	I0816 13:30:33.361534   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:33.365027   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.365428   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:33.365461   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.365681   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:33.365916   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:33.366090   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:33.366304   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:33.366488   48217 main.go:141] libmachine: Using SSH client type: native
	I0816 13:30:33.366697   48217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0816 13:30:33.366713   48217 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:30:33.486531   48217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815033.476754862
	
	I0816 13:30:33.486569   48217 fix.go:216] guest clock: 1723815033.476754862
	I0816 13:30:33.486580   48217 fix.go:229] Guest: 2024-08-16 13:30:33.476754862 +0000 UTC Remote: 2024-08-16 13:30:33.361514264 +0000 UTC m=+6.815016264 (delta=115.240598ms)
	I0816 13:30:33.486638   48217 fix.go:200] guest clock delta is within tolerance: 115.240598ms
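The fixHost step also reads the guest clock over SSH (date +%s.%N) and compares it with the host-side timestamp it recorded; the run proceeds here because the ~115ms delta is within tolerance. A rough, illustrative equivalent using the SSH key, user, and address from this log (not a command minikube runs itself):

    # Measure host/guest wall-clock skew for the pause-356375 VM.
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/pause-356375/id_rsa \
        docker@192.168.61.95 'date +%s.%N')
    echo "clock delta: $(echo "$guest_ts - $host_ts" | bc)s"
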
	I0816 13:30:33.486645   48217 start.go:83] releasing machines lock for "pause-356375", held for 6.777430196s
	I0816 13:30:33.486681   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:33.486981   48217 main.go:141] libmachine: (pause-356375) Calling .GetIP
	I0816 13:30:33.490386   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.490853   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:33.490927   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.491052   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:33.491639   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:33.491846   48217 main.go:141] libmachine: (pause-356375) Calling .DriverName
	I0816 13:30:33.491931   48217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:30:33.491979   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:33.492059   48217 ssh_runner.go:195] Run: cat /version.json
	I0816 13:30:33.492074   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHHostname
	I0816 13:30:33.495744   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.496197   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:33.496221   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.496499   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.496502   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:33.496803   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:33.496875   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:33.496894   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:33.496980   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:33.497098   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHPort
	I0816 13:30:33.497180   48217 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/pause-356375/id_rsa Username:docker}
	I0816 13:30:33.497302   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHKeyPath
	I0816 13:30:33.497488   48217 main.go:141] libmachine: (pause-356375) Calling .GetSSHUsername
	I0816 13:30:33.497693   48217 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/pause-356375/id_rsa Username:docker}
	I0816 13:30:33.604984   48217 ssh_runner.go:195] Run: systemctl --version
	I0816 13:30:33.613020   48217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:30:33.790729   48217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:30:33.799169   48217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:30:33.799262   48217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:30:33.813148   48217 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 13:30:33.813174   48217 start.go:495] detecting cgroup driver to use...
	I0816 13:30:33.813248   48217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:30:33.842133   48217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:30:33.865534   48217 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:30:33.865598   48217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:30:33.895893   48217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:30:33.919207   48217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:30:34.123327   48217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:30:34.319624   48217 docker.go:233] disabling docker service ...
	I0816 13:30:34.319707   48217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:30:34.342247   48217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:30:34.363505   48217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:30:34.574856   48217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:30:34.753877   48217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:30:34.772604   48217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:30:34.795066   48217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:30:34.795188   48217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:34.811255   48217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:30:34.811326   48217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:34.826264   48217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:34.840513   48217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:34.909870   48217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:30:34.931232   48217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:35.025516   48217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:35.140630   48217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:30:35.245200   48217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:30:35.294883   48217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:30:35.413641   48217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:30:35.776116   48217 ssh_runner.go:195] Run: sudo systemctl restart crio
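The block above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs (re-adding conmon_cgroup = "pod"), and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before reloading systemd and restarting CRI-O. Condensed from the logged commands into one sketch:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and use the cgroupfs cgroup manager (matching kubelet's cgroupDriver).
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind low ports without extra privileges.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
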
	I0816 13:30:36.518291   48217 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:30:36.518370   48217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:30:36.525190   48217 start.go:563] Will wait 60s for crictl version
	I0816 13:30:36.525268   48217 ssh_runner.go:195] Run: which crictl
	I0816 13:30:36.529311   48217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:30:36.573267   48217 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:30:36.573374   48217 ssh_runner.go:195] Run: crio --version
	I0816 13:30:36.609838   48217 ssh_runner.go:195] Run: crio --version
	I0816 13:30:36.652422   48217 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:30:36.653848   48217 main.go:141] libmachine: (pause-356375) Calling .GetIP
	I0816 13:30:36.657115   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:36.657633   48217 main.go:141] libmachine: (pause-356375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:5f:4d", ip: ""} in network mk-pause-356375: {Iface:virbr3 ExpiryTime:2024-08-16 14:29:15 +0000 UTC Type:0 Mac:52:54:00:e5:5f:4d Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:pause-356375 Clientid:01:52:54:00:e5:5f:4d}
	I0816 13:30:36.657677   48217 main.go:141] libmachine: (pause-356375) DBG | domain pause-356375 has defined IP address 192.168.61.95 and MAC address 52:54:00:e5:5f:4d in network mk-pause-356375
	I0816 13:30:36.657944   48217 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:30:36.664210   48217 kubeadm.go:883] updating cluster {Name:pause-356375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-356375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:30:36.664457   48217 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:30:36.664546   48217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:30:36.730783   48217 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:30:36.730813   48217 crio.go:433] Images already preloaded, skipping extraction
	I0816 13:30:36.730879   48217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:30:36.784283   48217 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:30:36.784311   48217 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:30:36.784321   48217 kubeadm.go:934] updating node { 192.168.61.95 8443 v1.31.0 crio true true} ...
	I0816 13:30:36.784458   48217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-356375 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-356375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
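The kubelet unit shown above is written to the node as a systemd drop-in plus service file (scp'd a few lines below as 10-kubeadm.conf and kubelet.service). To see the effective unit on the VM, something like the following works (inspection only, not part of the test flow):

    # Show kubelet's unit file together with minikube's drop-in overrides.
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
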
	I0816 13:30:36.784536   48217 ssh_runner.go:195] Run: crio config
	I0816 13:30:36.840723   48217 cni.go:84] Creating CNI manager for ""
	I0816 13:30:36.840745   48217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:30:36.840760   48217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:30:36.840787   48217 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.95 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-356375 NodeName:pause-356375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:30:36.840953   48217 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-356375"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:30:36.841018   48217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:30:36.859324   48217 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:30:36.859392   48217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:30:36.871791   48217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 13:30:36.892786   48217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:30:36.913401   48217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
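That scp lands the generated kubeadm/kubelet/kube-proxy configuration (printed in full above) at /var/tmp/minikube/kubeadm.yaml.new on the node. As a purely illustrative check, a config of this shape can be validated with the bundled kubeadm binary (not a step the test performs):

    # Hypothetical validation of the generated config against the v1.31.0 kubeadm schema.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
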
	I0816 13:30:36.934806   48217 ssh_runner.go:195] Run: grep 192.168.61.95	control-plane.minikube.internal$ /etc/hosts
	I0816 13:30:36.942758   48217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:30:37.250480   48217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:30:37.378013   48217 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375 for IP: 192.168.61.95
	I0816 13:30:37.378055   48217 certs.go:194] generating shared ca certs ...
	I0816 13:30:37.378079   48217 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:30:37.378337   48217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:30:37.378439   48217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:30:37.378456   48217 certs.go:256] generating profile certs ...
	I0816 13:30:37.378625   48217 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/client.key
	I0816 13:30:37.378724   48217 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/apiserver.key.10df6d7d
	I0816 13:30:37.378798   48217 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/proxy-client.key
	I0816 13:30:37.379004   48217 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:30:37.379056   48217 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:30:37.379083   48217 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:30:37.379117   48217 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:30:37.379168   48217 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:30:37.379200   48217 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:30:37.379272   48217 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:30:37.380266   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:30:37.506567   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:30:37.559014   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:30:37.624334   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:30:37.664854   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 13:30:37.699010   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:30:37.728620   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:30:37.759149   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/pause-356375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:30:37.800479   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:30:37.847988   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:30:37.890333   48217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:30:37.940812   48217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:30:37.984164   48217 ssh_runner.go:195] Run: openssl version
	I0816 13:30:37.993191   48217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:30:38.010817   48217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:30:38.018528   48217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:30:38.018607   48217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:30:38.027661   48217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:30:38.045904   48217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:30:38.066241   48217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:30:38.072524   48217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:30:38.072594   48217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:30:38.080660   48217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:30:38.094194   48217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:30:38.113353   48217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:30:38.164279   48217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:30:38.164441   48217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:30:38.176854   48217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:30:38.189753   48217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:30:38.197900   48217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:30:38.205423   48217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:30:38.212053   48217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:30:38.218688   48217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:30:38.226383   48217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:30:38.234761   48217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
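Each openssl run above asks whether a control-plane certificate is still valid 24 hours from now; -checkend 86400 exits non-zero if the cert would expire inside that window. The same checks, looped over the certs named in the log, as a compact sketch:

    # Flag any minikube-managed certificate that expires within 24 hours.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
        sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
            || echo "$c.crt expires within 24h"
    done
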
	I0816 13:30:38.241427   48217 kubeadm.go:392] StartCluster: {Name:pause-356375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-356375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:30:38.241612   48217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:30:38.241691   48217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:30:38.286994   48217 cri.go:89] found id: "c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7"
	I0816 13:30:38.287021   48217 cri.go:89] found id: "869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc"
	I0816 13:30:38.287027   48217 cri.go:89] found id: "e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3"
	I0816 13:30:38.287032   48217 cri.go:89] found id: "bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676"
	I0816 13:30:38.287035   48217 cri.go:89] found id: "92f0922a8e3f3c1e3e6f428433f5fbdb503a25dbbe12604de08bc1f47489eed4"
	I0816 13:30:38.287039   48217 cri.go:89] found id: "3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d"
	I0816 13:30:38.287043   48217 cri.go:89] found id: "5e8779526b1dd6e9465802954ef836693701b7a91aefcfb9a93bdfb547dde3ce"
	I0816 13:30:38.287047   48217 cri.go:89] found id: ""
	I0816 13:30:38.287127   48217 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
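The stderr capture above cuts off at the CRI container enumeration minikube performs while restarting the paused control plane: cri.go lists kube-system containers via crictl, then invokes runc. A rough, hypothetical way to reproduce that same listing by hand on the pause-356375 node, using only the two commands already shown in the log (and assuming crictl and runc are installed on the node, as the log indicates), would be:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json

The first command prints only the IDs of containers carrying the kube-system namespace label (the "found id:" lines above); the second dumps the low-level runc container state as JSON, which is the point at which the captured stderr ends.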
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-356375 -n pause-356375
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-356375 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-356375 logs -n 25: (1.355373191s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC | 16 Aug 24 13:26 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:27 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:27 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:27 UTC | 16 Aug 24 13:27 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:28 UTC |
	| start   | -p kubernetes-upgrade-759623   | kubernetes-upgrade-759623 | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-718759         | offline-crio-718759       | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:29 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-729731      | minikube                  | jenkins | v1.26.0 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:30 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-356375 --memory=2048  | pause-356375              | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:30 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-718759         | offline-crio-718759       | jenkins | v1.33.1 | 16 Aug 24 13:29 UTC | 16 Aug 24 13:29 UTC |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:29 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:29 UTC | 16 Aug 24 13:30 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-729731      | running-upgrade-729731    | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-356375                | pause-356375              | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:31 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:30 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:30 UTC |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:31 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-169820 sudo    | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC | 16 Aug 24 13:31 UTC |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:31:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:31:28.111381   49146 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:31:28.111473   49146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:31:28.111477   49146 out.go:358] Setting ErrFile to fd 2...
	I0816 13:31:28.111481   49146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:31:28.111666   49146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:31:28.112155   49146 out.go:352] Setting JSON to false
	I0816 13:31:28.113249   49146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4433,"bootTime":1723810655,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:31:28.113295   49146 start.go:139] virtualization: kvm guest
	I0816 13:31:28.115652   49146 out.go:177] * [NoKubernetes-169820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:31:28.116958   49146 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:31:28.116974   49146 notify.go:220] Checking for updates...
	I0816 13:31:28.119653   49146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:31:28.120884   49146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:31:28.122100   49146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:31:28.123606   49146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:31:28.124931   49146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:31:28.126523   49146 config.go:182] Loaded profile config "NoKubernetes-169820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0816 13:31:28.126978   49146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:31:28.127045   49146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:31:28.141828   49146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0816 13:31:28.142212   49146 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:31:28.142673   49146 main.go:141] libmachine: Using API Version  1
	I0816 13:31:28.142687   49146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:31:28.143001   49146 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:31:28.143145   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	I0816 13:31:28.143366   49146 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0816 13:31:28.143383   49146 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:31:28.143760   49146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:31:28.143797   49146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:31:28.157761   49146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0816 13:31:28.158092   49146 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:31:28.158544   49146 main.go:141] libmachine: Using API Version  1
	I0816 13:31:28.158556   49146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:31:28.158823   49146 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:31:28.158987   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	I0816 13:31:28.194191   49146 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:31:28.195515   49146 start.go:297] selected driver: kvm2
	I0816 13:31:28.195528   49146 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-169820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-169820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.148 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:31:28.195690   49146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:31:28.196102   49146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:31:28.196162   49146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:31:28.210734   49146 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:31:28.211447   49146 cni.go:84] Creating CNI manager for ""
	I0816 13:31:28.211455   49146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:31:28.211500   49146 start.go:340] cluster config:
	{Name:NoKubernetes-169820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-169820 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.148 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:31:28.211592   49146 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:31:28.214198   49146 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-169820
	I0816 13:31:28.215623   49146 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0816 13:31:28.773616   49146 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0816 13:31:28.773749   49146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/NoKubernetes-169820/config.json ...
	I0816 13:31:28.774020   49146 start.go:360] acquireMachinesLock for NoKubernetes-169820: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:31:28.774070   49146 start.go:364] duration metric: took 35.525µs to acquireMachinesLock for "NoKubernetes-169820"
	I0816 13:31:28.774084   49146 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:31:28.774088   49146 fix.go:54] fixHost starting: 
	I0816 13:31:28.774356   49146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:31:28.774387   49146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:31:28.789708   49146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0816 13:31:28.790127   49146 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:31:28.790662   49146 main.go:141] libmachine: Using API Version  1
	I0816 13:31:28.790677   49146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:31:28.790986   49146 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:31:28.791226   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	I0816 13:31:28.791394   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .GetState
	I0816 13:31:28.793320   49146 fix.go:112] recreateIfNeeded on NoKubernetes-169820: state=Stopped err=<nil>
	I0816 13:31:28.793353   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	W0816 13:31:28.793492   49146 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:31:28.795309   49146 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-169820" ...
	I0816 13:31:28.470128   48217 pod_ready.go:103] pod "kube-apiserver-pause-356375" in "kube-system" namespace has status "Ready":"False"
	I0816 13:31:28.968374   48217 pod_ready.go:93] pod "kube-apiserver-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.968396   48217 pod_ready.go:82] duration metric: took 9.508222534s for pod "kube-apiserver-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.968405   48217 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.974466   48217 pod_ready.go:93] pod "kube-controller-manager-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.974487   48217 pod_ready.go:82] duration metric: took 6.074749ms for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.974497   48217 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.980023   48217 pod_ready.go:93] pod "kube-proxy-s5r7l" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.980041   48217 pod_ready.go:82] duration metric: took 5.539454ms for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.980050   48217 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.987353   48217 pod_ready.go:93] pod "kube-scheduler-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.987389   48217 pod_ready.go:82] duration metric: took 7.331309ms for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.987400   48217 pod_ready.go:39] duration metric: took 15.046041931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:31:28.987417   48217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:31:29.000421   48217 ops.go:34] apiserver oom_adj: -16
	I0816 13:31:29.000444   48217 kubeadm.go:597] duration metric: took 50.638722057s to restartPrimaryControlPlane
	I0816 13:31:29.000456   48217 kubeadm.go:394] duration metric: took 50.759041177s to StartCluster
	I0816 13:31:29.000475   48217 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:31:29.000544   48217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:31:29.001814   48217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:31:29.002075   48217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:31:29.002192   48217 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:31:29.002344   48217 config.go:182] Loaded profile config "pause-356375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:31:29.003658   48217 out.go:177] * Enabled addons: 
	I0816 13:31:29.003670   48217 out.go:177] * Verifying Kubernetes components...
	I0816 13:31:24.824653   48168 api_server.go:253] Checking apiserver healthz at https://192.168.72.176:8443/healthz ...
	I0816 13:31:24.831325   48168 api_server.go:279] https://192.168.72.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:31:24.831347   48168 api_server.go:103] status: https://192.168.72.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:31:25.324403   48168 api_server.go:253] Checking apiserver healthz at https://192.168.72.176:8443/healthz ...
	I0816 13:31:25.329481   48168 api_server.go:279] https://192.168.72.176:8443/healthz returned 200:
	ok
	I0816 13:31:25.337652   48168 api_server.go:141] control plane version: v1.24.1
	I0816 13:31:25.337674   48168 api_server.go:131] duration metric: took 36.514134893s to wait for apiserver health ...
	I0816 13:31:25.337684   48168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:31:25.344699   48168 system_pods.go:59] 7 kube-system pods found
	I0816 13:31:25.344732   48168 system_pods.go:61] "coredns-6d4b75cb6d-mxpw5" [5d21130a-096f-4a6e-b543-01bc3568c70e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:31:25.344741   48168 system_pods.go:61] "etcd-running-upgrade-729731" [adfa725e-81e9-4a63-ab6b-9f07ce90f48c] Running
	I0816 13:31:25.344748   48168 system_pods.go:61] "kube-apiserver-running-upgrade-729731" [62f00339-bd25-4174-b6ba-3b13d8c123ec] Running
	I0816 13:31:25.344754   48168 system_pods.go:61] "kube-controller-manager-running-upgrade-729731" [2cffda32-b5cf-4e45-ab11-3d03adb6f8e4] Running
	I0816 13:31:25.344764   48168 system_pods.go:61] "kube-proxy-qvkjj" [d5af14f9-849f-4a23-b273-7a233a882737] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:31:25.344770   48168 system_pods.go:61] "kube-scheduler-running-upgrade-729731" [ab814916-f3ad-4fb9-83be-0892619cf030] Running
	I0816 13:31:25.344781   48168 system_pods.go:61] "storage-provisioner" [37310e34-ba0c-42d4-818c-87843c45064d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:31:25.344794   48168 system_pods.go:74] duration metric: took 7.101285ms to wait for pod list to return data ...
	I0816 13:31:25.344810   48168 kubeadm.go:582] duration metric: took 36.829738935s to wait for: map[apiserver:true system_pods:true]
	I0816 13:31:25.344828   48168 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:31:25.349822   48168 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 13:31:25.349847   48168 node_conditions.go:123] node cpu capacity is 2
	I0816 13:31:25.349857   48168 node_conditions.go:105] duration metric: took 5.023634ms to run NodePressure ...
	I0816 13:31:25.349869   48168 start.go:241] waiting for startup goroutines ...
	I0816 13:31:29.742859   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:31:29.743130   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:31:29.004995   48217 addons.go:510] duration metric: took 2.805331ms for enable addons: enabled=[]
	I0816 13:31:29.005055   48217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:31:29.219066   48217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:31:29.240787   48217 node_ready.go:35] waiting up to 6m0s for node "pause-356375" to be "Ready" ...
	I0816 13:31:29.244249   48217 node_ready.go:49] node "pause-356375" has status "Ready":"True"
	I0816 13:31:29.244282   48217 node_ready.go:38] duration metric: took 3.456545ms for node "pause-356375" to be "Ready" ...
	I0816 13:31:29.244293   48217 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:31:29.255031   48217 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5mkc9" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.364760   48217 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkc9" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:29.364791   48217 pod_ready.go:82] duration metric: took 109.717811ms for pod "coredns-6f6b679f8f-5mkc9" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.364805   48217 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.765594   48217 pod_ready.go:93] pod "etcd-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:29.765620   48217 pod_ready.go:82] duration metric: took 400.806959ms for pod "etcd-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.765632   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.163941   48217 pod_ready.go:93] pod "kube-apiserver-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:30.163971   48217 pod_ready.go:82] duration metric: took 398.330365ms for pod "kube-apiserver-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.163983   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.565544   48217 pod_ready.go:93] pod "kube-controller-manager-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:30.565568   48217 pod_ready.go:82] duration metric: took 401.575839ms for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.565581   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.965511   48217 pod_ready.go:93] pod "kube-proxy-s5r7l" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:30.965543   48217 pod_ready.go:82] duration metric: took 399.954286ms for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.965556   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:31.366232   48217 pod_ready.go:93] pod "kube-scheduler-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:31.366259   48217 pod_ready.go:82] duration metric: took 400.694894ms for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:31.366272   48217 pod_ready.go:39] duration metric: took 2.121968805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:31:31.366287   48217 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:31:31.366344   48217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:31:31.382846   48217 api_server.go:72] duration metric: took 2.380737721s to wait for apiserver process to appear ...
	I0816 13:31:31.382876   48217 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:31:31.382896   48217 api_server.go:253] Checking apiserver healthz at https://192.168.61.95:8443/healthz ...
	I0816 13:31:31.388503   48217 api_server.go:279] https://192.168.61.95:8443/healthz returned 200:
	ok
	I0816 13:31:31.389432   48217 api_server.go:141] control plane version: v1.31.0
	I0816 13:31:31.389454   48217 api_server.go:131] duration metric: took 6.569893ms to wait for apiserver health ...
	I0816 13:31:31.389463   48217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:31:31.567667   48217 system_pods.go:59] 6 kube-system pods found
	I0816 13:31:31.567707   48217 system_pods.go:61] "coredns-6f6b679f8f-5mkc9" [d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3] Running
	I0816 13:31:31.567717   48217 system_pods.go:61] "etcd-pause-356375" [a3adb3ab-b7ec-4892-a222-ecbd5f386f3d] Running
	I0816 13:31:31.567723   48217 system_pods.go:61] "kube-apiserver-pause-356375" [ab7fb55c-4524-47d8-a340-429b754fed3b] Running
	I0816 13:31:31.567728   48217 system_pods.go:61] "kube-controller-manager-pause-356375" [2a2b1bb2-a8ca-490a-846b-07366c76d22c] Running
	I0816 13:31:31.567733   48217 system_pods.go:61] "kube-proxy-s5r7l" [e5bc83bc-fa37-4011-868b-0b47230d3c6e] Running
	I0816 13:31:31.567741   48217 system_pods.go:61] "kube-scheduler-pause-356375" [079d342f-dc2a-4947-baad-38a636406991] Running
	I0816 13:31:31.567749   48217 system_pods.go:74] duration metric: took 178.280099ms to wait for pod list to return data ...
	I0816 13:31:31.567760   48217 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:31:31.765042   48217 default_sa.go:45] found service account: "default"
	I0816 13:31:31.765074   48217 default_sa.go:55] duration metric: took 197.303381ms for default service account to be created ...
	I0816 13:31:31.765086   48217 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:31:31.966421   48217 system_pods.go:86] 6 kube-system pods found
	I0816 13:31:31.966450   48217 system_pods.go:89] "coredns-6f6b679f8f-5mkc9" [d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3] Running
	I0816 13:31:31.966456   48217 system_pods.go:89] "etcd-pause-356375" [a3adb3ab-b7ec-4892-a222-ecbd5f386f3d] Running
	I0816 13:31:31.966460   48217 system_pods.go:89] "kube-apiserver-pause-356375" [ab7fb55c-4524-47d8-a340-429b754fed3b] Running
	I0816 13:31:31.966463   48217 system_pods.go:89] "kube-controller-manager-pause-356375" [2a2b1bb2-a8ca-490a-846b-07366c76d22c] Running
	I0816 13:31:31.966469   48217 system_pods.go:89] "kube-proxy-s5r7l" [e5bc83bc-fa37-4011-868b-0b47230d3c6e] Running
	I0816 13:31:31.966472   48217 system_pods.go:89] "kube-scheduler-pause-356375" [079d342f-dc2a-4947-baad-38a636406991] Running
	I0816 13:31:31.966479   48217 system_pods.go:126] duration metric: took 201.386834ms to wait for k8s-apps to be running ...
	I0816 13:31:31.966485   48217 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:31:31.966538   48217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:31:31.982027   48217 system_svc.go:56] duration metric: took 15.527896ms WaitForService to wait for kubelet
	I0816 13:31:31.982077   48217 kubeadm.go:582] duration metric: took 2.979973216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:31:31.982113   48217 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:31:32.164406   48217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:31:32.164437   48217 node_conditions.go:123] node cpu capacity is 2
	I0816 13:31:32.164447   48217 node_conditions.go:105] duration metric: took 182.32635ms to run NodePressure ...
	I0816 13:31:32.164460   48217 start.go:241] waiting for startup goroutines ...
	I0816 13:31:32.164466   48217 start.go:246] waiting for cluster config update ...
	I0816 13:31:32.164473   48217 start.go:255] writing updated cluster config ...
	I0816 13:31:32.164751   48217 ssh_runner.go:195] Run: rm -f paused
	I0816 13:31:32.213627   48217 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:31:32.215857   48217 out.go:177] * Done! kubectl is now configured to use "pause-356375" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.874276697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815092874249787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fca2f5ca-2cc6-44a3-be1c-9e41adba1321 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.874801830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eec2ac7-1c9c-4881-909b-b61f12a0d10e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.874874651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eec2ac7-1c9c-4881-909b-b61f12a0d10e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.875160226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8eec2ac7-1c9c-4881-909b-b61f12a0d10e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.918662049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f6b2c6f-a36e-4642-97e8-01312f40ff7c name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.918758683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f6b2c6f-a36e-4642-97e8-01312f40ff7c name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.919630215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=110ae0b5-6c75-47d8-82c3-21697c63370b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.920151640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815092920121748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=110ae0b5-6c75-47d8-82c3-21697c63370b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.920672072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4184cbeb-04fb-47ea-9aa5-7ca2c583f820 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.920738812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4184cbeb-04fb-47ea-9aa5-7ca2c583f820 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.921123685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4184cbeb-04fb-47ea-9aa5-7ca2c583f820 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.966783986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc1eaa38-2c43-4e40-927a-30a41d2c1861 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.966860879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc1eaa38-2c43-4e40-927a-30a41d2c1861 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.968999515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a02ebf6e-25d0-4a09-9b8d-c0cb7929e278 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.969415302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815092969390105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a02ebf6e-25d0-4a09-9b8d-c0cb7929e278 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.969890988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e834ce72-ea49-4195-bb24-a02b56e0625f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.969995999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e834ce72-ea49-4195-bb24-a02b56e0625f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:32 pause-356375 crio[2803]: time="2024-08-16 13:31:32.970230994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e834ce72-ea49-4195-bb24-a02b56e0625f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.022533450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88c4079a-0e5d-4f16-9a59-e54bb615dc16 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.022619152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88c4079a-0e5d-4f16-9a59-e54bb615dc16 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.024116870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=734b3a72-a722-442f-9a11-9a54d7cce9d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.024501056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815093024479226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=734b3a72-a722-442f-9a11-9a54d7cce9d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.025093925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b40d9529-f6d6-4cda-a2c2-f66b5539a27b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.025145802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b40d9529-f6d6-4cda-a2c2-f66b5539a27b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:33 pause-356375 crio[2803]: time="2024-08-16 13:31:33.025413316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b40d9529-f6d6-4cda-a2c2-f66b5539a27b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f367ca3640a8d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   19 seconds ago      Running             kube-proxy                2                   e701cd2e3fe9c       kube-proxy-s5r7l
	698cc17261e45       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   b356b7057b913       coredns-6f6b679f8f-5mkc9
	3a3d76b6252af       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago      Running             kube-apiserver            3                   dce00528f1e2c       kube-apiserver-pause-356375
	62c7f69b6d251       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   24 seconds ago      Running             kube-scheduler            2                   f1d76445c7c00       kube-scheduler-pause-356375
	2e00ed948540f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   24 seconds ago      Running             kube-controller-manager   2                   9aba8c6c36426       kube-controller-manager-pause-356375
	802d9b0bdea91       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Running             etcd                      2                   b9967ead36bab       etcd-pause-356375
	9e52df5a61660       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   46 seconds ago      Exited              kube-apiserver            2                   dce00528f1e2c       kube-apiserver-pause-356375
	c83fb4bd3f7e6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   55 seconds ago      Exited              coredns                   1                   b356b7057b913       coredns-6f6b679f8f-5mkc9
	869400086fb47       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   57 seconds ago      Exited              kube-proxy                1                   5e986af3720af       kube-proxy-s5r7l
	e431d9d28b0dc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   57 seconds ago      Exited              kube-controller-manager   1                   38831dc2313e1       kube-controller-manager-pause-356375
	bb18c4333d0f5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   57 seconds ago      Exited              kube-scheduler            1                   3be2591c2634e       kube-scheduler-pause-356375
	3ee07c27f2fa9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   57 seconds ago      Exited              etcd                      1                   aef4d8caec953       etcd-pause-356375
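
	A listing like the container-status table above can be reproduced directly on the node with crictl (a reproduction sketch, assuming the profile name pause-356375 and that crictl is pointed at the CRI-O socket, as it is inside the minikube VM):

	out/minikube-linux-amd64 -p pause-356375 ssh "sudo crictl ps -a"

	crictl ps -a includes exited containers as well as running ones, which is why each restarted control-plane component appears twice in the table (one Running attempt and one Exited attempt).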
	
	
	==> coredns [698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54449 - 39309 "HINFO IN 391187268088687600.8156975224389480256. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020423393s
	
	
	==> coredns [c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:37631 - 37466 "HINFO IN 255362115335927128.7070604868213086085. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030355206s
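
	The two coredns excerpts above come from the current and the previous coredns container for the same pod; either log can be pulled from the node by container ID (a sketch, assuming the truncated IDs from the status table are unambiguous prefixes):

	out/minikube-linux-amd64 -p pause-356375 ssh "sudo crictl logs 698cc17261e45"
	out/minikube-linux-amd64 -p pause-356375 ssh "sudo crictl logs c83fb4bd3f7e6"

	The "connection refused" errors against 10.96.0.1:443 in the older container are consistent with the kube-apiserver having been restarted underneath it.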
	
	
	==> describe nodes <==
	Name:               pause-356375
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-356375
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=pause-356375
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_29_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-356375
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:31:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.95
	  Hostname:    pause-356375
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2150d7f2d8204c7dac3f0d362eeaba30
	  System UUID:                2150d7f2-d820-4c7d-ac3f-0d362eeaba30
	  Boot ID:                    8aae2164-4d4c-4c2a-89bb-adda014ac442
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5mkc9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     107s
	  kube-system                 etcd-pause-356375                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-356375             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-356375    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-s5r7l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-pause-356375             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     112s               kubelet          Node pause-356375 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node pause-356375 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node pause-356375 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  NodeReady                111s               kubelet          Node pause-356375 status is now: NodeReady
	  Normal  RegisteredNode           108s               node-controller  Node pause-356375 event: Registered Node pause-356375 in Controller
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node pause-356375 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node pause-356375 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node pause-356375 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-356375 event: Registered Node pause-356375 in Controller
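
	The node summary above matches what kubectl reports for the node; a minimal way to capture the same view, assuming the kubeconfig context minikube created for this profile:

	kubectl --context pause-356375 describe node pause-356375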
	
	
	==> dmesg <==
	[  +0.068404] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.182232] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.170816] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.299213] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.279957] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.061587] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.503788] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.067124] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.002340] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.092528] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.348767] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.087068] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.063904] kauditd_printk_skb: 88 callbacks suppressed
	[Aug16 13:30] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.209769] systemd-fstab-generator[2297]: Ignoring "noauto" option for root device
	[  +0.236766] systemd-fstab-generator[2311]: Ignoring "noauto" option for root device
	[  +0.215627] systemd-fstab-generator[2323]: Ignoring "noauto" option for root device
	[  +0.880375] systemd-fstab-generator[2560]: Ignoring "noauto" option for root device
	[  +1.546859] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +4.541824] kauditd_printk_skb: 231 callbacks suppressed
	[  +8.132291] systemd-fstab-generator[3454]: Ignoring "noauto" option for root device
	[Aug16 13:31] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.181265] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.671325] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[  +0.135398] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d] <==
	
	
	==> etcd [802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8] <==
	{"level":"info","ts":"2024-08-16T13:31:08.977292Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.95:2380"}
	{"level":"info","ts":"2024-08-16T13:31:08.978976Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b13435ab5890267","initial-advertise-peer-urls":["https://192.168.61.95:2380"],"listen-peer-urls":["https://192.168.61.95:2380"],"advertise-client-urls":["https://192.168.61.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T13:31:08.979065Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:31:10.616870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T13:31:10.617109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:31:10.617178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 received MsgPreVoteResp from b13435ab5890267 at term 2"}
	{"level":"info","ts":"2024-08-16T13:31:10.617226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.617257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 received MsgVoteResp from b13435ab5890267 at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.617296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.617333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b13435ab5890267 elected leader b13435ab5890267 at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.624364Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b13435ab5890267","local-member-attributes":"{Name:pause-356375 ClientURLs:[https://192.168.61.95:2379]}","request-path":"/0/members/b13435ab5890267/attributes","cluster-id":"9e3f8e6bd390d9f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:31:10.624651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:31:10.624770Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:31:10.624827Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:31:10.624863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:31:10.626712Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:31:10.626722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:31:10.628690Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.95:2379"}
	{"level":"info","ts":"2024-08-16T13:31:10.629185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:31:12.360170Z","caller":"traceutil/trace.go:171","msg":"trace[214432106] linearizableReadLoop","detail":"{readStateIndex:425; appliedIndex:422; }","duration":"115.019772ms","start":"2024-08-16T13:31:12.245136Z","end":"2024-08-16T13:31:12.360156Z","steps":["trace[214432106] 'read index received'  (duration: 106.109822ms)","trace[214432106] 'applied index is now lower than readState.Index'  (duration: 8.909279ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T13:31:12.360504Z","caller":"traceutil/trace.go:171","msg":"trace[1997573248] transaction","detail":"{read_only:false; number_of_response:0; response_revision:400; }","duration":"172.494878ms","start":"2024-08-16T13:31:12.187998Z","end":"2024-08-16T13:31:12.360493Z","steps":["trace[1997573248] 'process raft request'  (duration: 163.234194ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:31:12.360573Z","caller":"traceutil/trace.go:171","msg":"trace[1490365903] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"156.060495ms","start":"2024-08-16T13:31:12.204507Z","end":"2024-08-16T13:31:12.360568Z","steps":["trace[1490365903] 'process raft request'  (duration: 155.562232ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:31:12.360589Z","caller":"traceutil/trace.go:171","msg":"trace[126956225] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"152.330697ms","start":"2024-08-16T13:31:12.208255Z","end":"2024-08-16T13:31:12.360586Z","steps":["trace[126956225] 'process raft request'  (duration: 151.86998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:31:12.360646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.481047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:31:12.361279Z","caller":"traceutil/trace.go:171","msg":"trace[689571226] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:402; }","duration":"116.136709ms","start":"2024-08-16T13:31:12.245133Z","end":"2024-08-16T13:31:12.361270Z","steps":["trace[689571226] 'agreement among raft nodes before linearized reading'  (duration: 115.466233ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:31:33 up 2 min,  0 users,  load average: 0.69, 0.32, 0.13
	Linux pause-356375 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f] <==
	I0816 13:31:12.151635       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 13:31:12.152495       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 13:31:12.152544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 13:31:12.152658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 13:31:12.155022       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 13:31:12.155194       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 13:31:12.158307       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 13:31:12.160429       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 13:31:12.160452       1 aggregator.go:171] initial CRD sync complete...
	I0816 13:31:12.160466       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 13:31:12.160471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 13:31:12.160476       1 cache.go:39] Caches are synced for autoregister controller
	I0816 13:31:12.161162       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 13:31:12.162248       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 13:31:12.162283       1 policy_source.go:224] refreshing policies
	E0816 13:31:12.164141       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 13:31:12.203872       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:31:12.955840       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 13:31:13.751350       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:31:13.767355       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:31:13.818583       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:31:13.853699       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:31:13.859845       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 13:31:15.643811       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:31:15.694511       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f] <==
	I0816 13:30:46.846574       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0816 13:30:47.172836       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:47.173731       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0816 13:30:47.173851       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 13:30:47.179399       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 13:30:47.182680       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 13:30:47.182695       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 13:30:47.182856       1 instance.go:232] Using reconciler: lease
	W0816 13:30:47.183947       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:48.173974       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:48.174294       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:48.185384       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:49.461073       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:49.746649       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:49.824170       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:51.937316       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:52.187234       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:52.193975       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:55.807363       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:56.130390       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:56.431296       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:31:02.686433       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:31:02.875059       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:31:03.762616       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0816 13:31:07.184223       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2] <==
	I0816 13:31:15.388228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0816 13:31:15.388380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="70.247µs"
	I0816 13:31:15.389600       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0816 13:31:15.389732       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0816 13:31:15.389868       1 shared_informer.go:320] Caches are synced for expand
	I0816 13:31:15.389938       1 shared_informer.go:320] Caches are synced for GC
	I0816 13:31:15.393533       1 shared_informer.go:320] Caches are synced for namespace
	I0816 13:31:15.393537       1 shared_informer.go:320] Caches are synced for node
	I0816 13:31:15.393799       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0816 13:31:15.393976       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0816 13:31:15.394085       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0816 13:31:15.394174       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0816 13:31:15.394359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-356375"
	I0816 13:31:15.397008       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0816 13:31:15.400522       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0816 13:31:15.408971       1 shared_informer.go:320] Caches are synced for service account
	I0816 13:31:15.417561       1 shared_informer.go:320] Caches are synced for PVC protection
	I0816 13:31:15.438548       1 shared_informer.go:320] Caches are synced for disruption
	I0816 13:31:15.441879       1 shared_informer.go:320] Caches are synced for daemon sets
	I0816 13:31:15.512802       1 shared_informer.go:320] Caches are synced for cronjob
	I0816 13:31:15.556095       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:31:15.597988       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:31:16.023242       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:31:16.039103       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:31:16.039184       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3] <==
	
	
	==> kube-proxy [869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc] <==
	
	
	==> kube-proxy [f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:31:13.581749       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:31:13.605116       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.95"]
	E0816 13:31:13.605215       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:31:13.660959       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:31:13.660994       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:31:13.661020       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:31:13.665142       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:31:13.665563       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:31:13.665647       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:31:13.667623       1 config.go:197] "Starting service config controller"
	I0816 13:31:13.667782       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:31:13.667860       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:31:13.667866       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:31:13.667967       1 config.go:326] "Starting node config controller"
	I0816 13:31:13.667997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:31:13.768800       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:31:13.768956       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:31:13.768969       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa] <==
	W0816 13:31:12.061781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:31:12.061812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.061892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:31:12.061980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.062263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 13:31:12.062365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.062572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 13:31:12.062654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.062797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:31:12.062876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 13:31:12.063098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 13:31:12.063280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 13:31:12.063511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:31:12.063701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:31:12.063881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.066093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 13:31:12.066180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.067067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 13:31:12.067125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 13:31:14.997823       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676] <==
	
	
	==> kubelet <==
	Aug 16 13:31:08 pause-356375 kubelet[3460]: E0816 13:31:08.396568    3460 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.95:8443: connect: connection refused" node="pause-356375"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: I0816 13:31:08.478033    3460 scope.go:117] "RemoveContainer" containerID="3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: I0816 13:31:08.482025    3460 scope.go:117] "RemoveContainer" containerID="e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: I0816 13:31:08.482639    3460 scope.go:117] "RemoveContainer" containerID="bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: E0816 13:31:08.592083    3460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-356375?timeout=10s\": dial tcp 192.168.61.95:8443: connect: connection refused" interval="800ms"
	Aug 16 13:31:09 pause-356375 kubelet[3460]: I0816 13:31:09.208308    3460 scope.go:117] "RemoveContainer" containerID="9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f"
	Aug 16 13:31:09 pause-356375 kubelet[3460]: E0816 13:31:09.393421    3460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-356375?timeout=10s\": dial tcp 192.168.61.95:8443: connect: connection refused" interval="1.6s"
	Aug 16 13:31:09 pause-356375 kubelet[3460]: I0816 13:31:09.998122    3460 kubelet_node_status.go:72] "Attempting to register node" node="pause-356375"
	Aug 16 13:31:10 pause-356375 kubelet[3460]: E0816 13:31:10.098862    3460 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815070098096195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:10 pause-356375 kubelet[3460]: E0816 13:31:10.099619    3460 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815070098096195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.365137    3460 kubelet_node_status.go:111] "Node was previously registered" node="pause-356375"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.365256    3460 kubelet_node_status.go:75] "Successfully registered node" node="pause-356375"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.365296    3460 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.366579    3460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: E0816 13:31:12.385528    3460 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-356375\" already exists" pod="kube-system/kube-apiserver-pause-356375"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.003648    3460 apiserver.go:52] "Watching apiserver"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.095297    3460 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.096610    3460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5bc83bc-fa37-4011-868b-0b47230d3c6e-xtables-lock\") pod \"kube-proxy-s5r7l\" (UID: \"e5bc83bc-fa37-4011-868b-0b47230d3c6e\") " pod="kube-system/kube-proxy-s5r7l"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.096661    3460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5bc83bc-fa37-4011-868b-0b47230d3c6e-lib-modules\") pod \"kube-proxy-s5r7l\" (UID: \"e5bc83bc-fa37-4011-868b-0b47230d3c6e\") " pod="kube-system/kube-proxy-s5r7l"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.307829    3460 scope.go:117] "RemoveContainer" containerID="869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.308344    3460 scope.go:117] "RemoveContainer" containerID="c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7"
	Aug 16 13:31:20 pause-356375 kubelet[3460]: E0816 13:31:20.101015    3460 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815080100750489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:20 pause-356375 kubelet[3460]: E0816 13:31:20.101059    3460 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815080100750489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:30 pause-356375 kubelet[3460]: E0816 13:31:30.102549    3460 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815090102124101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:30 pause-356375 kubelet[3460]: E0816 13:31:30.102965    3460 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815090102124101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-356375 -n pause-356375
helpers_test.go:261: (dbg) Run:  kubectl --context pause-356375 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-356375 -n pause-356375
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-356375 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-356375 logs -n 25: (1.370544889s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:26 UTC | 16 Aug 24 13:26 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:27 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:27 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:27 UTC | 16 Aug 24 13:27 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-802327       | scheduled-stop-802327     | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:28 UTC |
	| start   | -p kubernetes-upgrade-759623   | kubernetes-upgrade-759623 | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-718759         | offline-crio-718759       | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:29 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-729731      | minikube                  | jenkins | v1.26.0 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:30 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-356375 --memory=2048  | pause-356375              | jenkins | v1.33.1 | 16 Aug 24 13:28 UTC | 16 Aug 24 13:30 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-718759         | offline-crio-718759       | jenkins | v1.33.1 | 16 Aug 24 13:29 UTC | 16 Aug 24 13:29 UTC |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:29 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:29 UTC | 16 Aug 24 13:30 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-729731      | running-upgrade-729731    | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-356375                | pause-356375              | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:31 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:30 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:30 UTC |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:30 UTC | 16 Aug 24 13:31 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-169820 sudo    | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC | 16 Aug 24 13:31 UTC |
	| start   | -p NoKubernetes-169820         | NoKubernetes-169820       | jenkins | v1.33.1 | 16 Aug 24 13:31 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:31:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:31:28.111381   49146 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:31:28.111473   49146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:31:28.111477   49146 out.go:358] Setting ErrFile to fd 2...
	I0816 13:31:28.111481   49146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:31:28.111666   49146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:31:28.112155   49146 out.go:352] Setting JSON to false
	I0816 13:31:28.113249   49146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4433,"bootTime":1723810655,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:31:28.113295   49146 start.go:139] virtualization: kvm guest
	I0816 13:31:28.115652   49146 out.go:177] * [NoKubernetes-169820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:31:28.116958   49146 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:31:28.116974   49146 notify.go:220] Checking for updates...
	I0816 13:31:28.119653   49146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:31:28.120884   49146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:31:28.122100   49146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:31:28.123606   49146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:31:28.124931   49146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:31:28.126523   49146 config.go:182] Loaded profile config "NoKubernetes-169820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0816 13:31:28.126978   49146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:31:28.127045   49146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:31:28.141828   49146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0816 13:31:28.142212   49146 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:31:28.142673   49146 main.go:141] libmachine: Using API Version  1
	I0816 13:31:28.142687   49146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:31:28.143001   49146 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:31:28.143145   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	I0816 13:31:28.143366   49146 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0816 13:31:28.143383   49146 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:31:28.143760   49146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:31:28.143797   49146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:31:28.157761   49146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0816 13:31:28.158092   49146 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:31:28.158544   49146 main.go:141] libmachine: Using API Version  1
	I0816 13:31:28.158556   49146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:31:28.158823   49146 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:31:28.158987   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	I0816 13:31:28.194191   49146 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:31:28.195515   49146 start.go:297] selected driver: kvm2
	I0816 13:31:28.195528   49146 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-169820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-169820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.148 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:31:28.195690   49146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:31:28.196102   49146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:31:28.196162   49146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:31:28.210734   49146 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:31:28.211447   49146 cni.go:84] Creating CNI manager for ""
	I0816 13:31:28.211455   49146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:31:28.211500   49146 start.go:340] cluster config:
	{Name:NoKubernetes-169820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-169820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.148 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:31:28.211592   49146 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:31:28.214198   49146 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-169820
	I0816 13:31:28.215623   49146 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0816 13:31:28.773616   49146 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
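
The 404 above is expected: minikube probes a fixed GCS URL for a preloaded image tarball and simply falls back to pulling images when none is published for the requested version (here v0.0.0, i.e. no Kubernetes). A minimal sketch of that probe in Go, using the exact URL from the log line above (a plain HEAD request, not minikube's preload.go code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // URL copied from the log line above; a 404 just means no preload tarball
        // exists for this version, so minikube pulls images instead.
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Head(url)
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.StatusCode) // 404 expected here
    }
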
	I0816 13:31:28.773749   49146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/NoKubernetes-169820/config.json ...
	I0816 13:31:28.774020   49146 start.go:360] acquireMachinesLock for NoKubernetes-169820: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:31:28.774070   49146 start.go:364] duration metric: took 35.525µs to acquireMachinesLock for "NoKubernetes-169820"
	I0816 13:31:28.774084   49146 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:31:28.774088   49146 fix.go:54] fixHost starting: 
	I0816 13:31:28.774356   49146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:31:28.774387   49146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:31:28.789708   49146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0816 13:31:28.790127   49146 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:31:28.790662   49146 main.go:141] libmachine: Using API Version  1
	I0816 13:31:28.790677   49146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:31:28.790986   49146 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:31:28.791226   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	I0816 13:31:28.791394   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .GetState
	I0816 13:31:28.793320   49146 fix.go:112] recreateIfNeeded on NoKubernetes-169820: state=Stopped err=<nil>
	I0816 13:31:28.793353   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .DriverName
	W0816 13:31:28.793492   49146 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:31:28.795309   49146 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-169820" ...
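
For reference, the Stopped state that GetState reports above (which triggers the "Restarting existing kvm2 VM" path) can be confirmed directly against libvirt. A hypothetical manual check, assuming the qemu:///system URI from the cluster config and the domain name from the profile:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Hypothetical manual check of the same state libmachine reports above
        // ("Stopped" before the restart), via libvirt's CLI.
        out, err := exec.Command("virsh", "-c", "qemu:///system",
            "domstate", "NoKubernetes-169820").CombinedOutput()
        if err != nil {
            fmt.Println("virsh failed:", err)
            return
        }
        fmt.Println("domain state:", strings.TrimSpace(string(out)))
    }
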
	I0816 13:31:28.470128   48217 pod_ready.go:103] pod "kube-apiserver-pause-356375" in "kube-system" namespace has status "Ready":"False"
	I0816 13:31:28.968374   48217 pod_ready.go:93] pod "kube-apiserver-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.968396   48217 pod_ready.go:82] duration metric: took 9.508222534s for pod "kube-apiserver-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.968405   48217 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.974466   48217 pod_ready.go:93] pod "kube-controller-manager-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.974487   48217 pod_ready.go:82] duration metric: took 6.074749ms for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.974497   48217 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.980023   48217 pod_ready.go:93] pod "kube-proxy-s5r7l" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.980041   48217 pod_ready.go:82] duration metric: took 5.539454ms for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.980050   48217 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.987353   48217 pod_ready.go:93] pod "kube-scheduler-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:28.987389   48217 pod_ready.go:82] duration metric: took 7.331309ms for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:28.987400   48217 pod_ready.go:39] duration metric: took 15.046041931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:31:28.987417   48217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:31:29.000421   48217 ops.go:34] apiserver oom_adj: -16
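
The oom_adj check above just reads the apiserver's OOM score adjustment from /proc over SSH. A rough local equivalent of that ssh_runner command, with the same pgrep pattern and /proc path as in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same check as the ssh_runner command above: find the kube-apiserver PID
        // and read its OOM score adjustment (-16 in the log).
        pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("pgrep failed:", err)
            return
        }
        pid := strings.Fields(string(pidOut))[0]
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("kube-apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
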
	I0816 13:31:29.000444   48217 kubeadm.go:597] duration metric: took 50.638722057s to restartPrimaryControlPlane
	I0816 13:31:29.000456   48217 kubeadm.go:394] duration metric: took 50.759041177s to StartCluster
	I0816 13:31:29.000475   48217 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:31:29.000544   48217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:31:29.001814   48217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:31:29.002075   48217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:31:29.002192   48217 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:31:29.002344   48217 config.go:182] Loaded profile config "pause-356375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:31:29.003658   48217 out.go:177] * Enabled addons: 
	I0816 13:31:29.003670   48217 out.go:177] * Verifying Kubernetes components...
	I0816 13:31:24.824653   48168 api_server.go:253] Checking apiserver healthz at https://192.168.72.176:8443/healthz ...
	I0816 13:31:24.831325   48168 api_server.go:279] https://192.168.72.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:31:24.831347   48168 api_server.go:103] status: https://192.168.72.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:31:25.324403   48168 api_server.go:253] Checking apiserver healthz at https://192.168.72.176:8443/healthz ...
	I0816 13:31:25.329481   48168 api_server.go:279] https://192.168.72.176:8443/healthz returned 200:
	ok
	I0816 13:31:25.337652   48168 api_server.go:141] control plane version: v1.24.1
	I0816 13:31:25.337674   48168 api_server.go:131] duration metric: took 36.514134893s to wait for apiserver health ...
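
The 500-then-200 sequence above is the usual apiserver startup pattern: /healthz returns 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then flips to "ok". A minimal sketch of the polling loop, assuming the endpoint from the log and skipping certificate verification for brevity (minikube itself validates against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Poll /healthz until the apiserver stops returning 500 and answers 200 "ok".
        // InsecureSkipVerify is used only to keep this sketch self-contained.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 60; i++ {
            resp, err := client.Get("https://192.168.72.176:8443/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", code)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver")
    }
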
	I0816 13:31:25.337684   48168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:31:25.344699   48168 system_pods.go:59] 7 kube-system pods found
	I0816 13:31:25.344732   48168 system_pods.go:61] "coredns-6d4b75cb6d-mxpw5" [5d21130a-096f-4a6e-b543-01bc3568c70e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:31:25.344741   48168 system_pods.go:61] "etcd-running-upgrade-729731" [adfa725e-81e9-4a63-ab6b-9f07ce90f48c] Running
	I0816 13:31:25.344748   48168 system_pods.go:61] "kube-apiserver-running-upgrade-729731" [62f00339-bd25-4174-b6ba-3b13d8c123ec] Running
	I0816 13:31:25.344754   48168 system_pods.go:61] "kube-controller-manager-running-upgrade-729731" [2cffda32-b5cf-4e45-ab11-3d03adb6f8e4] Running
	I0816 13:31:25.344764   48168 system_pods.go:61] "kube-proxy-qvkjj" [d5af14f9-849f-4a23-b273-7a233a882737] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:31:25.344770   48168 system_pods.go:61] "kube-scheduler-running-upgrade-729731" [ab814916-f3ad-4fb9-83be-0892619cf030] Running
	I0816 13:31:25.344781   48168 system_pods.go:61] "storage-provisioner" [37310e34-ba0c-42d4-818c-87843c45064d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:31:25.344794   48168 system_pods.go:74] duration metric: took 7.101285ms to wait for pod list to return data ...
	I0816 13:31:25.344810   48168 kubeadm.go:582] duration metric: took 36.829738935s to wait for: map[apiserver:true system_pods:true]
	I0816 13:31:25.344828   48168 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:31:25.349822   48168 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 13:31:25.349847   48168 node_conditions.go:123] node cpu capacity is 2
	I0816 13:31:25.349857   48168 node_conditions.go:105] duration metric: took 5.023634ms to run NodePressure ...
	I0816 13:31:25.349869   48168 start.go:241] waiting for startup goroutines ...
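
The NodePressure step above reads the node's reported capacity (2 CPUs, 17784752Ki ephemeral storage here). A rough kubectl-based equivalent, with the minikube profile name used as the kubeconfig context (illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Rough equivalent of the NodePressure verification above: read the node's
        // CPU and ephemeral-storage capacity from its status.
        out, err := exec.Command("kubectl", "--context", "running-upgrade-729731",
            "get", "nodes", "-o",
            "jsonpath={.items[0].status.capacity.cpu} {.items[0].status.capacity['ephemeral-storage']}").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Println("cpu / ephemeral-storage:", string(out))
    }
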
	I0816 13:31:29.742859   46501 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:31:29.743130   46501 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
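
The [kubelet-check] failure above is nothing more than this probe: kubeadm hits the kubelet's healthz endpoint on localhost:10248 and gets "connection refused" while the kubelet is not yet running. A minimal sketch of the same probe:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // kubelet exposes a plain-HTTP healthz endpoint on localhost:10248 once
        // the service is up; connection refused means it is not (yet) running.
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz status:", resp.StatusCode)
    }
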
	I0816 13:31:29.004995   48217 addons.go:510] duration metric: took 2.805331ms for enable addons: enabled=[]
	I0816 13:31:29.005055   48217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:31:29.219066   48217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:31:29.240787   48217 node_ready.go:35] waiting up to 6m0s for node "pause-356375" to be "Ready" ...
	I0816 13:31:29.244249   48217 node_ready.go:49] node "pause-356375" has status "Ready":"True"
	I0816 13:31:29.244282   48217 node_ready.go:38] duration metric: took 3.456545ms for node "pause-356375" to be "Ready" ...
	I0816 13:31:29.244293   48217 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:31:29.255031   48217 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-5mkc9" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.364760   48217 pod_ready.go:93] pod "coredns-6f6b679f8f-5mkc9" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:29.364791   48217 pod_ready.go:82] duration metric: took 109.717811ms for pod "coredns-6f6b679f8f-5mkc9" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.364805   48217 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.765594   48217 pod_ready.go:93] pod "etcd-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:29.765620   48217 pod_ready.go:82] duration metric: took 400.806959ms for pod "etcd-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:29.765632   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.163941   48217 pod_ready.go:93] pod "kube-apiserver-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:30.163971   48217 pod_ready.go:82] duration metric: took 398.330365ms for pod "kube-apiserver-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.163983   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.565544   48217 pod_ready.go:93] pod "kube-controller-manager-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:30.565568   48217 pod_ready.go:82] duration metric: took 401.575839ms for pod "kube-controller-manager-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.565581   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.965511   48217 pod_ready.go:93] pod "kube-proxy-s5r7l" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:30.965543   48217 pod_ready.go:82] duration metric: took 399.954286ms for pod "kube-proxy-s5r7l" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:30.965556   48217 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:31.366232   48217 pod_ready.go:93] pod "kube-scheduler-pause-356375" in "kube-system" namespace has status "Ready":"True"
	I0816 13:31:31.366259   48217 pod_ready.go:82] duration metric: took 400.694894ms for pod "kube-scheduler-pause-356375" in "kube-system" namespace to be "Ready" ...
	I0816 13:31:31.366272   48217 pod_ready.go:39] duration metric: took 2.121968805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
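
The readiness gate above can also be expressed with kubectl wait against the same label selectors. A sketch, assuming the minikube profile name doubles as the kubeconfig context:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Wait for kube-dns and the control-plane components to report Ready.
        // Selectors mirror the list in the log; the context name is illustrative.
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
        }
        for _, sel := range selectors {
            cmd := exec.Command("kubectl", "--context", "pause-356375", "-n", "kube-system",
                "wait", "--for=condition=Ready", "pod", "-l", sel, "--timeout=6m")
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("wait failed for", sel+":", err)
            }
        }
    }
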
	I0816 13:31:31.366287   48217 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:31:31.366344   48217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:31:31.382846   48217 api_server.go:72] duration metric: took 2.380737721s to wait for apiserver process to appear ...
	I0816 13:31:31.382876   48217 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:31:31.382896   48217 api_server.go:253] Checking apiserver healthz at https://192.168.61.95:8443/healthz ...
	I0816 13:31:31.388503   48217 api_server.go:279] https://192.168.61.95:8443/healthz returned 200:
	ok
	I0816 13:31:31.389432   48217 api_server.go:141] control plane version: v1.31.0
	I0816 13:31:31.389454   48217 api_server.go:131] duration metric: took 6.569893ms to wait for apiserver health ...
	I0816 13:31:31.389463   48217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:31:31.567667   48217 system_pods.go:59] 6 kube-system pods found
	I0816 13:31:31.567707   48217 system_pods.go:61] "coredns-6f6b679f8f-5mkc9" [d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3] Running
	I0816 13:31:31.567717   48217 system_pods.go:61] "etcd-pause-356375" [a3adb3ab-b7ec-4892-a222-ecbd5f386f3d] Running
	I0816 13:31:31.567723   48217 system_pods.go:61] "kube-apiserver-pause-356375" [ab7fb55c-4524-47d8-a340-429b754fed3b] Running
	I0816 13:31:31.567728   48217 system_pods.go:61] "kube-controller-manager-pause-356375" [2a2b1bb2-a8ca-490a-846b-07366c76d22c] Running
	I0816 13:31:31.567733   48217 system_pods.go:61] "kube-proxy-s5r7l" [e5bc83bc-fa37-4011-868b-0b47230d3c6e] Running
	I0816 13:31:31.567741   48217 system_pods.go:61] "kube-scheduler-pause-356375" [079d342f-dc2a-4947-baad-38a636406991] Running
	I0816 13:31:31.567749   48217 system_pods.go:74] duration metric: took 178.280099ms to wait for pod list to return data ...
	I0816 13:31:31.567760   48217 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:31:31.765042   48217 default_sa.go:45] found service account: "default"
	I0816 13:31:31.765074   48217 default_sa.go:55] duration metric: took 197.303381ms for default service account to be created ...
	I0816 13:31:31.765086   48217 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:31:31.966421   48217 system_pods.go:86] 6 kube-system pods found
	I0816 13:31:31.966450   48217 system_pods.go:89] "coredns-6f6b679f8f-5mkc9" [d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3] Running
	I0816 13:31:31.966456   48217 system_pods.go:89] "etcd-pause-356375" [a3adb3ab-b7ec-4892-a222-ecbd5f386f3d] Running
	I0816 13:31:31.966460   48217 system_pods.go:89] "kube-apiserver-pause-356375" [ab7fb55c-4524-47d8-a340-429b754fed3b] Running
	I0816 13:31:31.966463   48217 system_pods.go:89] "kube-controller-manager-pause-356375" [2a2b1bb2-a8ca-490a-846b-07366c76d22c] Running
	I0816 13:31:31.966469   48217 system_pods.go:89] "kube-proxy-s5r7l" [e5bc83bc-fa37-4011-868b-0b47230d3c6e] Running
	I0816 13:31:31.966472   48217 system_pods.go:89] "kube-scheduler-pause-356375" [079d342f-dc2a-4947-baad-38a636406991] Running
	I0816 13:31:31.966479   48217 system_pods.go:126] duration metric: took 201.386834ms to wait for k8s-apps to be running ...
	I0816 13:31:31.966485   48217 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:31:31.966538   48217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:31:31.982027   48217 system_svc.go:56] duration metric: took 15.527896ms WaitForService to wait for kubelet
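
The kubelet service check above relies only on systemctl's exit status. A minimal sketch of the same check run locally on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl exits 0 only when the kubelet unit is active, so the exit
        // code alone answers the question asked by the ssh_runner line above.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }
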
	I0816 13:31:31.982077   48217 kubeadm.go:582] duration metric: took 2.979973216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:31:31.982113   48217 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:31:32.164406   48217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:31:32.164437   48217 node_conditions.go:123] node cpu capacity is 2
	I0816 13:31:32.164447   48217 node_conditions.go:105] duration metric: took 182.32635ms to run NodePressure ...
	I0816 13:31:32.164460   48217 start.go:241] waiting for startup goroutines ...
	I0816 13:31:32.164466   48217 start.go:246] waiting for cluster config update ...
	I0816 13:31:32.164473   48217 start.go:255] writing updated cluster config ...
	I0816 13:31:32.164751   48217 ssh_runner.go:195] Run: rm -f paused
	I0816 13:31:32.213627   48217 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:31:32.215857   48217 out.go:177] * Done! kubectl is now configured to use "pause-356375" cluster and "default" namespace by default
	I0816 13:31:28.796506   49146 main.go:141] libmachine: (NoKubernetes-169820) Calling .Start
	I0816 13:31:28.796681   49146 main.go:141] libmachine: (NoKubernetes-169820) Ensuring networks are active...
	I0816 13:31:28.797524   49146 main.go:141] libmachine: (NoKubernetes-169820) Ensuring network default is active
	I0816 13:31:28.797916   49146 main.go:141] libmachine: (NoKubernetes-169820) Ensuring network mk-NoKubernetes-169820 is active
	I0816 13:31:28.798268   49146 main.go:141] libmachine: (NoKubernetes-169820) Getting domain xml...
	I0816 13:31:28.799030   49146 main.go:141] libmachine: (NoKubernetes-169820) Creating domain...
	I0816 13:31:30.121671   49146 main.go:141] libmachine: (NoKubernetes-169820) Waiting to get IP...
	I0816 13:31:30.122694   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:30.123147   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:30.123219   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:30.123122   49181 retry.go:31] will retry after 250.54006ms: waiting for machine to come up
	I0816 13:31:30.375593   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:30.376110   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:30.376176   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:30.376054   49181 retry.go:31] will retry after 334.667512ms: waiting for machine to come up
	I0816 13:31:30.712552   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:30.713022   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:30.713037   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:30.712971   49181 retry.go:31] will retry after 366.135609ms: waiting for machine to come up
	I0816 13:31:31.080496   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:31.081002   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:31.081031   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:31.080971   49181 retry.go:31] will retry after 464.982019ms: waiting for machine to come up
	I0816 13:31:31.547768   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:31.548293   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:31.548313   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:31.548235   49181 retry.go:31] will retry after 551.547887ms: waiting for machine to come up
	I0816 13:31:32.101068   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:32.101560   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:32.101577   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:32.101514   49181 retry.go:31] will retry after 866.371357ms: waiting for machine to come up
	I0816 13:31:32.969498   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | domain NoKubernetes-169820 has defined MAC address 52:54:00:04:04:c8 in network mk-NoKubernetes-169820
	I0816 13:31:32.970010   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | unable to find current IP address of domain NoKubernetes-169820 in network mk-NoKubernetes-169820
	I0816 13:31:32.970057   49146 main.go:141] libmachine: (NoKubernetes-169820) DBG | I0816 13:31:32.969964   49181 retry.go:31] will retry after 741.960574ms: waiting for machine to come up
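
The retry lines above implement a simple grow-the-delay poll until libvirt hands the VM an address. A hypothetical way to watch for the same DHCP lease from outside minikube, using the network name and MAC address shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        // Poll libvirt's DHCP leases for the domain's MAC address with growing
        // delays until an IP appears. Network name and MAC come from the log.
        delaysMs := []int{250, 350, 500, 750, 1000, 1500, 2000}
        for _, d := range delaysMs {
            out, err := exec.Command("virsh", "-c", "qemu:///system",
                "net-dhcp-leases", "mk-NoKubernetes-169820").Output()
            if err == nil && strings.Contains(string(out), "52:54:00:04:04:c8") {
                fmt.Print(string(out)) // lease entry includes the assigned IP
                return
            }
            time.Sleep(time.Duration(d) * time.Millisecond)
        }
        fmt.Println("machine did not get an IP in time")
    }
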
	
	
	==> CRI-O <==
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.840160141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815094840121677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=700cce36-c66a-4b16-bdb6-e9bd409f5223 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.840806291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f677c2bc-027a-42d7-ba86-011d005335f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.840864244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f677c2bc-027a-42d7-ba86-011d005335f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.841169897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f677c2bc-027a-42d7-ba86-011d005335f3 name=/runtime.v1.RuntimeService/ListContainers
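
The ListContainers and ImageFsInfo entries above are the server side of CRI calls arriving at CRI-O; the same data can be queried from the node with crictl. A sketch, assuming CRI-O's default socket path:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Client-side view of the CRI calls logged above: list containers and the
        // image filesystem usage via crictl.
        sock := "unix:///var/run/crio/crio.sock"
        for _, args := range [][]string{
            {"--runtime-endpoint", sock, "ps", "-a"},    // running and exited containers
            {"--runtime-endpoint", sock, "imagefsinfo"}, // image filesystem usage
        } {
            cmd := exec.Command("crictl", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("crictl failed:", err)
            }
        }
    }
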
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.883532151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01b505ad-5de6-4d54-8c09-11faa5208219 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.883633963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01b505ad-5de6-4d54-8c09-11faa5208219 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.886522198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cd77349-6dae-495f-8744-597d5f5d0a68 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.887034755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815094887010052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cd77349-6dae-495f-8744-597d5f5d0a68 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.888657202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc3faf19-120f-4197-9d0c-6b9c7dc35343 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.888731128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc3faf19-120f-4197-9d0c-6b9c7dc35343 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.889027938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc3faf19-120f-4197-9d0c-6b9c7dc35343 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.935740918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa0bfeb8-ab69-4f65-a27a-1a82ec703058 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.935829833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa0bfeb8-ab69-4f65-a27a-1a82ec703058 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.936983527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b01039c8-1cbb-43d0-a3f5-281b384da1d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.937473816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815094937449415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b01039c8-1cbb-43d0-a3f5-281b384da1d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.938073995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfb21dcf-e67e-47d1-86b7-a4022e6b8064 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.938144665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfb21dcf-e67e-47d1-86b7-a4022e6b8064 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.938383179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfb21dcf-e67e-47d1-86b7-a4022e6b8064 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.989562405Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65f61ffa-7019-43c6-aba2-fc5723534cee name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.989660584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65f61ffa-7019-43c6-aba2-fc5723534cee name=/runtime.v1.RuntimeService/Version
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.991059150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ea5e2b3-d3b3-4c60-aaca-386b86c602b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.991422355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815094991400997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ea5e2b3-d3b3-4c60-aaca-386b86c602b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.992066931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81572598-41d4-41e8-9701-fd7a94c3c6e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.992138889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81572598-41d4-41e8-9701-fd7a94c3c6e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:31:34 pause-356375 crio[2803]: time="2024-08-16 13:31:34.992367776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169,PodSandboxId:e701cd2e3fe9cda8b314b70915d98368c1d41fbfbfba6c7bed93591233b00d66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815073332167221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815073324421981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815069233090284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13
ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa,PodSandboxId:f1d76445c7c009a6f4d161acaf6d374b867e7cc056182685ad44f2c0169941f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815068522309512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a3
8b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8,PodSandboxId:b9967ead36baba37a2d8bf12e9cf039812673423280c21188f7d88c57bf93415,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815068495403629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2,PodSandboxId:9aba8c6c36426672053356c6584b8ccc34046609fa8c0d40d888a3d32cbaee7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815068514672685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.
kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f,PodSandboxId:dce00528f1e2c0d0ddf372de60b58ff7e0829859a1d907d42b978b8615d4dc27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815046646537088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7dee116a13ff34607b19d5c8a7d75e4,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7,PodSandboxId:b356b7057b91331c8198cd786cacc7705438297f24fdf1d47f11188a25a9f302,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723815037772308994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5mkc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f3c142-a0dc-4fc8-9f85-a8d9dc6aced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc,PodSandboxId:5e986af3720af8fed05785330617b01d2d40b452cf3354a7f97660027f4c52f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723815035642393197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s
5r7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5bc83bc-fa37-4011-868b-0b47230d3c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3,PodSandboxId:38831dc2313e134732cf542bae349cb074ef6a5e98a0ad2a9e90572229844b4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723815035589579601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7c006a84dfaec546eb6543f597c337,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676,PodSandboxId:3be2591c2634edb593fa4318bac622b20fde73a8dd101a412cc8255ce48f5995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723815035480571637,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-356375,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2087a04043a1b5f9290941330a38b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d,PodSandboxId:aef4d8caec953cc23b1c24e5bfbb04f19022c3e0accce892963aa7ba3ad2b2c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723815035225293522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-356375,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b27f4dcfb13b23a27379df872fa50109,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81572598-41d4-41e8-9701-fd7a94c3c6e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f367ca3640a8d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   21 seconds ago      Running             kube-proxy                2                   e701cd2e3fe9c       kube-proxy-s5r7l
	698cc17261e45       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   b356b7057b913       coredns-6f6b679f8f-5mkc9
	3a3d76b6252af       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   25 seconds ago      Running             kube-apiserver            3                   dce00528f1e2c       kube-apiserver-pause-356375
	62c7f69b6d251       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   26 seconds ago      Running             kube-scheduler            2                   f1d76445c7c00       kube-scheduler-pause-356375
	2e00ed948540f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   26 seconds ago      Running             kube-controller-manager   2                   9aba8c6c36426       kube-controller-manager-pause-356375
	802d9b0bdea91       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago      Running             etcd                      2                   b9967ead36bab       etcd-pause-356375
	9e52df5a61660       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   48 seconds ago      Exited              kube-apiserver            2                   dce00528f1e2c       kube-apiserver-pause-356375
	c83fb4bd3f7e6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   57 seconds ago      Exited              coredns                   1                   b356b7057b913       coredns-6f6b679f8f-5mkc9
	869400086fb47       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   59 seconds ago      Exited              kube-proxy                1                   5e986af3720af       kube-proxy-s5r7l
	e431d9d28b0dc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   59 seconds ago      Exited              kube-controller-manager   1                   38831dc2313e1       kube-controller-manager-pause-356375
	bb18c4333d0f5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   59 seconds ago      Exited              kube-scheduler            1                   3be2591c2634e       kube-scheduler-pause-356375
	3ee07c27f2fa9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   59 seconds ago      Exited              etcd                      1                   aef4d8caec953       etcd-pause-356375
	
	
	==> coredns [698cc17261e45cbe816ecad32122ff7a21edeb0cf9b4e328d74c4be555f3fc74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54449 - 39309 "HINFO IN 391187268088687600.8156975224389480256. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020423393s
	
	
	==> coredns [c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:37631 - 37466 "HINFO IN 255362115335927128.7070604868213086085. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030355206s
	
	
	==> describe nodes <==
	Name:               pause-356375
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-356375
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=pause-356375
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_29_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-356375
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:31:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:31:12 +0000   Fri, 16 Aug 2024 13:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.95
	  Hostname:    pause-356375
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2150d7f2d8204c7dac3f0d362eeaba30
	  System UUID:                2150d7f2-d820-4c7d-ac3f-0d362eeaba30
	  Boot ID:                    8aae2164-4d4c-4c2a-89bb-adda014ac442
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5mkc9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     109s
	  kube-system                 etcd-pause-356375                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         114s
	  kube-system                 kube-apiserver-pause-356375             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-pause-356375    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-s5r7l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-pause-356375             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     114s               kubelet          Node pause-356375 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node pause-356375 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node pause-356375 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  NodeReady                113s               kubelet          Node pause-356375 status is now: NodeReady
	  Normal  RegisteredNode           110s               node-controller  Node pause-356375 event: Registered Node pause-356375 in Controller
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node pause-356375 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node pause-356375 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node pause-356375 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-356375 event: Registered Node pause-356375 in Controller
	
	
	==> dmesg <==
	[  +0.068404] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.182232] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.170816] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.299213] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.279957] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.061587] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.503788] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.067124] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.002340] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.092528] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.348767] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.087068] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.063904] kauditd_printk_skb: 88 callbacks suppressed
	[Aug16 13:30] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.209769] systemd-fstab-generator[2297]: Ignoring "noauto" option for root device
	[  +0.236766] systemd-fstab-generator[2311]: Ignoring "noauto" option for root device
	[  +0.215627] systemd-fstab-generator[2323]: Ignoring "noauto" option for root device
	[  +0.880375] systemd-fstab-generator[2560]: Ignoring "noauto" option for root device
	[  +1.546859] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +4.541824] kauditd_printk_skb: 231 callbacks suppressed
	[  +8.132291] systemd-fstab-generator[3454]: Ignoring "noauto" option for root device
	[Aug16 13:31] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.181265] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.671325] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[  +0.135398] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d] <==
	
	
	==> etcd [802d9b0bdea91592420d456ebd3a24bd868afda1f0956f27dd84ee1d2c063da8] <==
	{"level":"info","ts":"2024-08-16T13:31:08.977292Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.95:2380"}
	{"level":"info","ts":"2024-08-16T13:31:08.978976Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b13435ab5890267","initial-advertise-peer-urls":["https://192.168.61.95:2380"],"listen-peer-urls":["https://192.168.61.95:2380"],"advertise-client-urls":["https://192.168.61.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T13:31:08.979065Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:31:10.616870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T13:31:10.617109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:31:10.617178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 received MsgPreVoteResp from b13435ab5890267 at term 2"}
	{"level":"info","ts":"2024-08-16T13:31:10.617226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.617257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 received MsgVoteResp from b13435ab5890267 at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.617296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b13435ab5890267 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.617333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b13435ab5890267 elected leader b13435ab5890267 at term 3"}
	{"level":"info","ts":"2024-08-16T13:31:10.624364Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b13435ab5890267","local-member-attributes":"{Name:pause-356375 ClientURLs:[https://192.168.61.95:2379]}","request-path":"/0/members/b13435ab5890267/attributes","cluster-id":"9e3f8e6bd390d9f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:31:10.624651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:31:10.624770Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:31:10.624827Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:31:10.624863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:31:10.626712Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:31:10.626722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:31:10.628690Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.95:2379"}
	{"level":"info","ts":"2024-08-16T13:31:10.629185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:31:12.360170Z","caller":"traceutil/trace.go:171","msg":"trace[214432106] linearizableReadLoop","detail":"{readStateIndex:425; appliedIndex:422; }","duration":"115.019772ms","start":"2024-08-16T13:31:12.245136Z","end":"2024-08-16T13:31:12.360156Z","steps":["trace[214432106] 'read index received'  (duration: 106.109822ms)","trace[214432106] 'applied index is now lower than readState.Index'  (duration: 8.909279ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T13:31:12.360504Z","caller":"traceutil/trace.go:171","msg":"trace[1997573248] transaction","detail":"{read_only:false; number_of_response:0; response_revision:400; }","duration":"172.494878ms","start":"2024-08-16T13:31:12.187998Z","end":"2024-08-16T13:31:12.360493Z","steps":["trace[1997573248] 'process raft request'  (duration: 163.234194ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:31:12.360573Z","caller":"traceutil/trace.go:171","msg":"trace[1490365903] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"156.060495ms","start":"2024-08-16T13:31:12.204507Z","end":"2024-08-16T13:31:12.360568Z","steps":["trace[1490365903] 'process raft request'  (duration: 155.562232ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T13:31:12.360589Z","caller":"traceutil/trace.go:171","msg":"trace[126956225] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"152.330697ms","start":"2024-08-16T13:31:12.208255Z","end":"2024-08-16T13:31:12.360586Z","steps":["trace[126956225] 'process raft request'  (duration: 151.86998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:31:12.360646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.481047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T13:31:12.361279Z","caller":"traceutil/trace.go:171","msg":"trace[689571226] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:402; }","duration":"116.136709ms","start":"2024-08-16T13:31:12.245133Z","end":"2024-08-16T13:31:12.361270Z","steps":["trace[689571226] 'agreement among raft nodes before linearized reading'  (duration: 115.466233ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:31:35 up 2 min,  0 users,  load average: 0.64, 0.32, 0.12
	Linux pause-356375 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3a3d76b6252af6f98db04521887ef6d9ad0d8fec64d5b093aeacd2b8a9450b1f] <==
	I0816 13:31:12.151635       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 13:31:12.152495       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 13:31:12.152544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 13:31:12.152658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 13:31:12.155022       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 13:31:12.155194       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 13:31:12.158307       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 13:31:12.160429       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 13:31:12.160452       1 aggregator.go:171] initial CRD sync complete...
	I0816 13:31:12.160466       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 13:31:12.160471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 13:31:12.160476       1 cache.go:39] Caches are synced for autoregister controller
	I0816 13:31:12.161162       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 13:31:12.162248       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 13:31:12.162283       1 policy_source.go:224] refreshing policies
	E0816 13:31:12.164141       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 13:31:12.203872       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:31:12.955840       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 13:31:13.751350       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:31:13.767355       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:31:13.818583       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:31:13.853699       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:31:13.859845       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 13:31:15.643811       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:31:15.694511       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f] <==
	I0816 13:30:46.846574       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0816 13:30:47.172836       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:47.173731       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0816 13:30:47.173851       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 13:30:47.179399       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 13:30:47.182680       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 13:30:47.182695       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 13:30:47.182856       1 instance.go:232] Using reconciler: lease
	W0816 13:30:47.183947       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:48.173974       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:48.174294       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:48.185384       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:49.461073       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:49.746649       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:49.824170       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:51.937316       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:52.187234       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:52.193975       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:55.807363       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:56.130390       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:30:56.431296       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:31:02.686433       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:31:02.875059       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:31:03.762616       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0816 13:31:07.184223       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [2e00ed948540f4ce859c64c1f9c24d04f3264bf0b33e478d4ced10a5501f7bf2] <==
	I0816 13:31:15.388228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0816 13:31:15.388380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="70.247µs"
	I0816 13:31:15.389600       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0816 13:31:15.389732       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0816 13:31:15.389868       1 shared_informer.go:320] Caches are synced for expand
	I0816 13:31:15.389938       1 shared_informer.go:320] Caches are synced for GC
	I0816 13:31:15.393533       1 shared_informer.go:320] Caches are synced for namespace
	I0816 13:31:15.393537       1 shared_informer.go:320] Caches are synced for node
	I0816 13:31:15.393799       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0816 13:31:15.393976       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0816 13:31:15.394085       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0816 13:31:15.394174       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0816 13:31:15.394359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-356375"
	I0816 13:31:15.397008       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0816 13:31:15.400522       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0816 13:31:15.408971       1 shared_informer.go:320] Caches are synced for service account
	I0816 13:31:15.417561       1 shared_informer.go:320] Caches are synced for PVC protection
	I0816 13:31:15.438548       1 shared_informer.go:320] Caches are synced for disruption
	I0816 13:31:15.441879       1 shared_informer.go:320] Caches are synced for daemon sets
	I0816 13:31:15.512802       1 shared_informer.go:320] Caches are synced for cronjob
	I0816 13:31:15.556095       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:31:15.597988       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 13:31:16.023242       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:31:16.039103       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:31:16.039184       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3] <==
	
	
	==> kube-proxy [869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc] <==
	
	
	==> kube-proxy [f367ca3640a8d316a76d5ff8e4016f6225dfda551875f398bacd2dcf777c6169] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:31:13.581749       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:31:13.605116       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.95"]
	E0816 13:31:13.605215       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:31:13.660959       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:31:13.660994       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:31:13.661020       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:31:13.665142       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:31:13.665563       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:31:13.665647       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:31:13.667623       1 config.go:197] "Starting service config controller"
	I0816 13:31:13.667782       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:31:13.667860       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:31:13.667866       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:31:13.667967       1 config.go:326] "Starting node config controller"
	I0816 13:31:13.667997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:31:13.768800       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:31:13.768956       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:31:13.768969       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [62c7f69b6d2512a2b1b776c66258191dc56b3e8a4e953194df93038a150312aa] <==
	W0816 13:31:12.061781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:31:12.061812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.061892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:31:12.061980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.062263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 13:31:12.062365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.062572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 13:31:12.062654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.062797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:31:12.062876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 13:31:12.063098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 13:31:12.063280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 13:31:12.063511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:31:12.063701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.063817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:31:12.063881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.066093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 13:31:12.066180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 13:31:12.067067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 13:31:12.067125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 13:31:14.997823       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676] <==
	
	
	==> kubelet <==
	Aug 16 13:31:08 pause-356375 kubelet[3460]: E0816 13:31:08.396568    3460 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.95:8443: connect: connection refused" node="pause-356375"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: I0816 13:31:08.478033    3460 scope.go:117] "RemoveContainer" containerID="3ee07c27f2fa921f98b92431882892c3ca2512d03804b097206361e54779f73d"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: I0816 13:31:08.482025    3460 scope.go:117] "RemoveContainer" containerID="e431d9d28b0dc358a0fe60ad154b50e31ef53490c30b10c8499efce9d2be37c3"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: I0816 13:31:08.482639    3460 scope.go:117] "RemoveContainer" containerID="bb18c4333d0f5062cf53db13c5b390aa62949c93454fc9c61990dc021b9f1676"
	Aug 16 13:31:08 pause-356375 kubelet[3460]: E0816 13:31:08.592083    3460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-356375?timeout=10s\": dial tcp 192.168.61.95:8443: connect: connection refused" interval="800ms"
	Aug 16 13:31:09 pause-356375 kubelet[3460]: I0816 13:31:09.208308    3460 scope.go:117] "RemoveContainer" containerID="9e52df5a61660e485bc705c956685309c88e3f61191ad892f220c0aa905c4a6f"
	Aug 16 13:31:09 pause-356375 kubelet[3460]: E0816 13:31:09.393421    3460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-356375?timeout=10s\": dial tcp 192.168.61.95:8443: connect: connection refused" interval="1.6s"
	Aug 16 13:31:09 pause-356375 kubelet[3460]: I0816 13:31:09.998122    3460 kubelet_node_status.go:72] "Attempting to register node" node="pause-356375"
	Aug 16 13:31:10 pause-356375 kubelet[3460]: E0816 13:31:10.098862    3460 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815070098096195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:10 pause-356375 kubelet[3460]: E0816 13:31:10.099619    3460 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815070098096195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.365137    3460 kubelet_node_status.go:111] "Node was previously registered" node="pause-356375"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.365256    3460 kubelet_node_status.go:75] "Successfully registered node" node="pause-356375"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.365296    3460 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: I0816 13:31:12.366579    3460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 13:31:12 pause-356375 kubelet[3460]: E0816 13:31:12.385528    3460 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-356375\" already exists" pod="kube-system/kube-apiserver-pause-356375"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.003648    3460 apiserver.go:52] "Watching apiserver"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.095297    3460 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.096610    3460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5bc83bc-fa37-4011-868b-0b47230d3c6e-xtables-lock\") pod \"kube-proxy-s5r7l\" (UID: \"e5bc83bc-fa37-4011-868b-0b47230d3c6e\") " pod="kube-system/kube-proxy-s5r7l"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.096661    3460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5bc83bc-fa37-4011-868b-0b47230d3c6e-lib-modules\") pod \"kube-proxy-s5r7l\" (UID: \"e5bc83bc-fa37-4011-868b-0b47230d3c6e\") " pod="kube-system/kube-proxy-s5r7l"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.307829    3460 scope.go:117] "RemoveContainer" containerID="869400086fb4715dcf21b51f25a0efb8f01b8b1804934eb092e2211d6e56c9bc"
	Aug 16 13:31:13 pause-356375 kubelet[3460]: I0816 13:31:13.308344    3460 scope.go:117] "RemoveContainer" containerID="c83fb4bd3f7e647a65b5b4f9ef499a8273b6bf543ecd4095eb1df915e8fa7fe7"
	Aug 16 13:31:20 pause-356375 kubelet[3460]: E0816 13:31:20.101015    3460 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815080100750489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:20 pause-356375 kubelet[3460]: E0816 13:31:20.101059    3460 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815080100750489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:30 pause-356375 kubelet[3460]: E0816 13:31:30.102549    3460 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815090102124101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:31:30 pause-356375 kubelet[3460]: E0816 13:31:30.102965    3460 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723815090102124101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-356375 -n pause-356375
helpers_test.go:261: (dbg) Run:  kubectl --context pause-356375 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (72.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-169820 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-169820 --driver=kvm2  --container-runtime=crio: exit status 80 (1m11.958157372s)

                                                
                                                
-- stdout --
	* [NoKubernetes-169820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-169820
	* Restarting existing kvm2 VM for "NoKubernetes-169820" ...
	* Updating the running kvm2 "NoKubernetes-169820" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p NoKubernetes-169820" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-169820 --driver=kvm2  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-169820 -n NoKubernetes-169820
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-169820 -n NoKubernetes-169820: exit status 6 (260.710713ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:32:40.272256   52436 status.go:417] kubeconfig endpoint: get endpoint: "NoKubernetes-169820" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-169820" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (72.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (319s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 13:33:56.822752   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m18.724424637s)

                                                
                                                
-- stdout --
	* [old-k8s-version-882237] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-882237" primary control-plane node in "old-k8s-version-882237" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:33:44.407892   53711 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:33:44.407990   53711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:33:44.407997   53711 out.go:358] Setting ErrFile to fd 2...
	I0816 13:33:44.408001   53711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:33:44.408176   53711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:33:44.408753   53711 out.go:352] Setting JSON to false
	I0816 13:33:44.409664   53711 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4569,"bootTime":1723810655,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:33:44.409725   53711 start.go:139] virtualization: kvm guest
	I0816 13:33:44.412042   53711 out.go:177] * [old-k8s-version-882237] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:33:44.413394   53711 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:33:44.413442   53711 notify.go:220] Checking for updates...
	I0816 13:33:44.415798   53711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:33:44.417062   53711 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:33:44.418453   53711 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:33:44.419910   53711 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:33:44.421387   53711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:33:44.423213   53711 config.go:182] Loaded profile config "cert-expiration-050553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:33:44.423336   53711 config.go:182] Loaded profile config "cert-options-779306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:33:44.423440   53711 config.go:182] Loaded profile config "kubernetes-upgrade-759623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:33:44.423540   53711 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:33:44.466805   53711 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 13:33:44.468339   53711 start.go:297] selected driver: kvm2
	I0816 13:33:44.468359   53711 start.go:901] validating driver "kvm2" against <nil>
	I0816 13:33:44.468371   53711 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:33:44.469167   53711 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:33:44.469249   53711 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:33:44.484971   53711 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:33:44.485040   53711 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 13:33:44.485285   53711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:33:44.485332   53711 cni.go:84] Creating CNI manager for ""
	I0816 13:33:44.485355   53711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:33:44.485373   53711 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 13:33:44.485432   53711 start.go:340] cluster config:
	{Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:33:44.485537   53711 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:33:44.488730   53711 out.go:177] * Starting "old-k8s-version-882237" primary control-plane node in "old-k8s-version-882237" cluster
	I0816 13:33:44.490183   53711 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:33:44.490246   53711 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:33:44.490258   53711 cache.go:56] Caching tarball of preloaded images
	I0816 13:33:44.490339   53711 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:33:44.490351   53711 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 13:33:44.490448   53711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:33:44.490472   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json: {Name:mk3af00a1f07964ba25654c40fbab5e5ad86df61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:33:44.490632   53711 start.go:360] acquireMachinesLock for old-k8s-version-882237: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:34:34.218290   53711 start.go:364] duration metric: took 49.727592736s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:34:34.218400   53711 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:34:34.218526   53711 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 13:34:34.220753   53711 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 13:34:34.220995   53711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:34:34.221038   53711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:34:34.237180   53711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I0816 13:34:34.237652   53711 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:34:34.238284   53711 main.go:141] libmachine: Using API Version  1
	I0816 13:34:34.238307   53711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:34:34.238630   53711 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:34:34.238864   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:34.239070   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:34.239236   53711 start.go:159] libmachine.API.Create for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:34:34.239275   53711 client.go:168] LocalClient.Create starting
	I0816 13:34:34.239308   53711 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 13:34:34.239344   53711 main.go:141] libmachine: Decoding PEM data...
	I0816 13:34:34.239364   53711 main.go:141] libmachine: Parsing certificate...
	I0816 13:34:34.239433   53711 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 13:34:34.239458   53711 main.go:141] libmachine: Decoding PEM data...
	I0816 13:34:34.239475   53711 main.go:141] libmachine: Parsing certificate...
	I0816 13:34:34.239498   53711 main.go:141] libmachine: Running pre-create checks...
	I0816 13:34:34.239510   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .PreCreateCheck
	I0816 13:34:34.239839   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:34:34.240199   53711 main.go:141] libmachine: Creating machine...
	I0816 13:34:34.240212   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .Create
	I0816 13:34:34.240320   53711 main.go:141] libmachine: (old-k8s-version-882237) Creating KVM machine...
	I0816 13:34:34.241488   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found existing default KVM network
	I0816 13:34:34.242677   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.242529   54366 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:d7:42} reservation:<nil>}
	I0816 13:34:34.243523   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.243458   54366 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:cc:5c:97} reservation:<nil>}
	I0816 13:34:34.244656   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.244595   54366 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:be:96:5e} reservation:<nil>}
	I0816 13:34:34.245824   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.245742   54366 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091b0}
	I0816 13:34:34.245856   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | created network xml: 
	I0816 13:34:34.245873   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | <network>
	I0816 13:34:34.245884   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |   <name>mk-old-k8s-version-882237</name>
	I0816 13:34:34.245897   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |   <dns enable='no'/>
	I0816 13:34:34.245907   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |   
	I0816 13:34:34.245917   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0816 13:34:34.245932   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |     <dhcp>
	I0816 13:34:34.245946   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0816 13:34:34.245955   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |     </dhcp>
	I0816 13:34:34.245963   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |   </ip>
	I0816 13:34:34.245972   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG |   
	I0816 13:34:34.245982   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | </network>
	I0816 13:34:34.245998   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | 
	I0816 13:34:34.251486   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | trying to create private KVM network mk-old-k8s-version-882237 192.168.72.0/24...
	I0816 13:34:34.320574   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | private KVM network mk-old-k8s-version-882237 192.168.72.0/24 created
	I0816 13:34:34.320629   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.320541   54366 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:34:34.320650   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237 ...
	I0816 13:34:34.320680   53711 main.go:141] libmachine: (old-k8s-version-882237) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 13:34:34.320703   53711 main.go:141] libmachine: (old-k8s-version-882237) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 13:34:34.560675   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.560552   54366 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa...
	I0816 13:34:34.704560   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.704434   54366 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/old-k8s-version-882237.rawdisk...
	I0816 13:34:34.704595   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Writing magic tar header
	I0816 13:34:34.704613   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Writing SSH key tar header
	I0816 13:34:34.704631   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:34.704578   54366 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237 ...
	I0816 13:34:34.704763   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237
	I0816 13:34:34.704797   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237 (perms=drwx------)
	I0816 13:34:34.704811   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 13:34:34.704840   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 13:34:34.704862   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 13:34:34.704876   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:34:34.704931   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 13:34:34.704953   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 13:34:34.704963   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 13:34:34.704975   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home/jenkins
	I0816 13:34:34.704989   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Checking permissions on dir: /home
	I0816 13:34:34.705002   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Skipping /home - not owner
	I0816 13:34:34.705017   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 13:34:34.705035   53711 main.go:141] libmachine: (old-k8s-version-882237) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 13:34:34.705048   53711 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:34:34.706018   53711 main.go:141] libmachine: (old-k8s-version-882237) define libvirt domain using xml: 
	I0816 13:34:34.706039   53711 main.go:141] libmachine: (old-k8s-version-882237) <domain type='kvm'>
	I0816 13:34:34.706065   53711 main.go:141] libmachine: (old-k8s-version-882237)   <name>old-k8s-version-882237</name>
	I0816 13:34:34.706087   53711 main.go:141] libmachine: (old-k8s-version-882237)   <memory unit='MiB'>2200</memory>
	I0816 13:34:34.706108   53711 main.go:141] libmachine: (old-k8s-version-882237)   <vcpu>2</vcpu>
	I0816 13:34:34.706124   53711 main.go:141] libmachine: (old-k8s-version-882237)   <features>
	I0816 13:34:34.706144   53711 main.go:141] libmachine: (old-k8s-version-882237)     <acpi/>
	I0816 13:34:34.706155   53711 main.go:141] libmachine: (old-k8s-version-882237)     <apic/>
	I0816 13:34:34.706170   53711 main.go:141] libmachine: (old-k8s-version-882237)     <pae/>
	I0816 13:34:34.706186   53711 main.go:141] libmachine: (old-k8s-version-882237)     
	I0816 13:34:34.706201   53711 main.go:141] libmachine: (old-k8s-version-882237)   </features>
	I0816 13:34:34.706214   53711 main.go:141] libmachine: (old-k8s-version-882237)   <cpu mode='host-passthrough'>
	I0816 13:34:34.706223   53711 main.go:141] libmachine: (old-k8s-version-882237)   
	I0816 13:34:34.706234   53711 main.go:141] libmachine: (old-k8s-version-882237)   </cpu>
	I0816 13:34:34.706246   53711 main.go:141] libmachine: (old-k8s-version-882237)   <os>
	I0816 13:34:34.706257   53711 main.go:141] libmachine: (old-k8s-version-882237)     <type>hvm</type>
	I0816 13:34:34.706266   53711 main.go:141] libmachine: (old-k8s-version-882237)     <boot dev='cdrom'/>
	I0816 13:34:34.706276   53711 main.go:141] libmachine: (old-k8s-version-882237)     <boot dev='hd'/>
	I0816 13:34:34.706285   53711 main.go:141] libmachine: (old-k8s-version-882237)     <bootmenu enable='no'/>
	I0816 13:34:34.706295   53711 main.go:141] libmachine: (old-k8s-version-882237)   </os>
	I0816 13:34:34.706303   53711 main.go:141] libmachine: (old-k8s-version-882237)   <devices>
	I0816 13:34:34.706319   53711 main.go:141] libmachine: (old-k8s-version-882237)     <disk type='file' device='cdrom'>
	I0816 13:34:34.706337   53711 main.go:141] libmachine: (old-k8s-version-882237)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/boot2docker.iso'/>
	I0816 13:34:34.706348   53711 main.go:141] libmachine: (old-k8s-version-882237)       <target dev='hdc' bus='scsi'/>
	I0816 13:34:34.706362   53711 main.go:141] libmachine: (old-k8s-version-882237)       <readonly/>
	I0816 13:34:34.706368   53711 main.go:141] libmachine: (old-k8s-version-882237)     </disk>
	I0816 13:34:34.706377   53711 main.go:141] libmachine: (old-k8s-version-882237)     <disk type='file' device='disk'>
	I0816 13:34:34.706393   53711 main.go:141] libmachine: (old-k8s-version-882237)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 13:34:34.706419   53711 main.go:141] libmachine: (old-k8s-version-882237)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/old-k8s-version-882237.rawdisk'/>
	I0816 13:34:34.706436   53711 main.go:141] libmachine: (old-k8s-version-882237)       <target dev='hda' bus='virtio'/>
	I0816 13:34:34.706459   53711 main.go:141] libmachine: (old-k8s-version-882237)     </disk>
	I0816 13:34:34.706473   53711 main.go:141] libmachine: (old-k8s-version-882237)     <interface type='network'>
	I0816 13:34:34.706484   53711 main.go:141] libmachine: (old-k8s-version-882237)       <source network='mk-old-k8s-version-882237'/>
	I0816 13:34:34.706491   53711 main.go:141] libmachine: (old-k8s-version-882237)       <model type='virtio'/>
	I0816 13:34:34.706501   53711 main.go:141] libmachine: (old-k8s-version-882237)     </interface>
	I0816 13:34:34.706513   53711 main.go:141] libmachine: (old-k8s-version-882237)     <interface type='network'>
	I0816 13:34:34.706525   53711 main.go:141] libmachine: (old-k8s-version-882237)       <source network='default'/>
	I0816 13:34:34.706536   53711 main.go:141] libmachine: (old-k8s-version-882237)       <model type='virtio'/>
	I0816 13:34:34.706548   53711 main.go:141] libmachine: (old-k8s-version-882237)     </interface>
	I0816 13:34:34.706564   53711 main.go:141] libmachine: (old-k8s-version-882237)     <serial type='pty'>
	I0816 13:34:34.706576   53711 main.go:141] libmachine: (old-k8s-version-882237)       <target port='0'/>
	I0816 13:34:34.706598   53711 main.go:141] libmachine: (old-k8s-version-882237)     </serial>
	I0816 13:34:34.706611   53711 main.go:141] libmachine: (old-k8s-version-882237)     <console type='pty'>
	I0816 13:34:34.706622   53711 main.go:141] libmachine: (old-k8s-version-882237)       <target type='serial' port='0'/>
	I0816 13:34:34.706639   53711 main.go:141] libmachine: (old-k8s-version-882237)     </console>
	I0816 13:34:34.706648   53711 main.go:141] libmachine: (old-k8s-version-882237)     <rng model='virtio'>
	I0816 13:34:34.706658   53711 main.go:141] libmachine: (old-k8s-version-882237)       <backend model='random'>/dev/random</backend>
	I0816 13:34:34.706668   53711 main.go:141] libmachine: (old-k8s-version-882237)     </rng>
	I0816 13:34:34.706679   53711 main.go:141] libmachine: (old-k8s-version-882237)     
	I0816 13:34:34.706689   53711 main.go:141] libmachine: (old-k8s-version-882237)     
	I0816 13:34:34.706701   53711 main.go:141] libmachine: (old-k8s-version-882237)   </devices>
	I0816 13:34:34.706711   53711 main.go:141] libmachine: (old-k8s-version-882237) </domain>
	I0816 13:34:34.706725   53711 main.go:141] libmachine: (old-k8s-version-882237) 
	I0816 13:34:34.714022   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:82:06:99 in network default
	I0816 13:34:34.714671   53711 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:34:34.714693   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:34.715523   53711 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:34:34.715905   53711 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:34:34.716459   53711 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:34:34.717224   53711 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:34:36.031875   53711 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:34:36.033014   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:36.033539   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:36.033569   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:36.033523   54366 retry.go:31] will retry after 212.711529ms: waiting for machine to come up
	I0816 13:34:36.248010   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:36.248581   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:36.248619   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:36.248544   54366 retry.go:31] will retry after 293.195468ms: waiting for machine to come up
	I0816 13:34:36.543033   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:36.543735   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:36.543757   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:36.543698   54366 retry.go:31] will retry after 438.589641ms: waiting for machine to come up
	I0816 13:34:36.984358   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:36.985255   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:36.985283   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:36.985171   54366 retry.go:31] will retry after 512.847303ms: waiting for machine to come up
	I0816 13:34:37.500115   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:37.500658   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:37.500686   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:37.500617   54366 retry.go:31] will retry after 642.752348ms: waiting for machine to come up
	I0816 13:34:38.145281   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:38.145725   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:38.145779   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:38.145679   54366 retry.go:31] will retry after 780.21508ms: waiting for machine to come up
	I0816 13:34:38.927793   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:38.928326   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:38.928353   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:38.928271   54366 retry.go:31] will retry after 839.130274ms: waiting for machine to come up
	I0816 13:34:39.768568   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:39.769088   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:39.769118   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:39.769027   54366 retry.go:31] will retry after 1.199891989s: waiting for machine to come up
	I0816 13:34:40.970186   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:40.970656   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:40.970693   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:40.970624   54366 retry.go:31] will retry after 1.210384761s: waiting for machine to come up
	I0816 13:34:42.183124   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:42.183703   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:42.183732   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:42.183654   54366 retry.go:31] will retry after 1.947114674s: waiting for machine to come up
	I0816 13:34:44.133062   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:44.133542   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:44.133572   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:44.133497   54366 retry.go:31] will retry after 2.610717042s: waiting for machine to come up
	I0816 13:34:46.746341   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:46.746725   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:46.746746   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:46.746696   54366 retry.go:31] will retry after 2.285613509s: waiting for machine to come up
	I0816 13:34:49.033450   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:49.033880   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:49.033912   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:49.033842   54366 retry.go:31] will retry after 4.372749182s: waiting for machine to come up
	I0816 13:34:53.410300   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:53.410713   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:34:53.410735   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:34:53.410666   54366 retry.go:31] will retry after 3.706660755s: waiting for machine to come up
	I0816 13:34:57.121176   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.121748   53711 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:34:57.121779   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.121789   53711 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:34:57.122155   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237
	I0816 13:34:57.195448   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:34:57.195476   53711 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:34:57.195488   53711 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:34:57.198202   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.198734   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.198764   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.198892   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:34:57.198921   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:34:57.198962   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:34:57.198976   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:34:57.198991   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:34:57.329005   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
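For reference, the external SSH probe logged above corresponds roughly to the following standalone invocation (reconstructed from the DBG line; the host, port, key path and options are the ones from this run):

    /usr/bin/ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa \
      -p 22 docker@192.168.72.105 'exit 0'

An exit status of 0 is what the WaitForSSH step treats as the machine being reachable.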
	I0816 13:34:57.329301   53711 main.go:141] libmachine: (old-k8s-version-882237) KVM machine creation complete!
	I0816 13:34:57.329565   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:34:57.330207   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:57.330419   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:57.330592   53711 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 13:34:57.330604   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:34:57.331893   53711 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 13:34:57.331906   53711 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 13:34:57.331913   53711 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 13:34:57.331922   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.334652   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.335087   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.335125   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.335299   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.335457   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.335598   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.335723   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.335909   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.336157   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.336173   53711 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 13:34:57.448252   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:34:57.448272   53711 main.go:141] libmachine: Detecting the provisioner...
	I0816 13:34:57.448280   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.451473   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.451908   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.451935   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.452107   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.452439   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.452591   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.452775   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.452960   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.453153   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.453172   53711 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 13:34:57.569932   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 13:34:57.570001   53711 main.go:141] libmachine: found compatible host: buildroot
	I0816 13:34:57.570014   53711 main.go:141] libmachine: Provisioning with buildroot...
	I0816 13:34:57.570025   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:57.570297   53711 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:34:57.570326   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:57.570564   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.573141   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.573547   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.573576   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.573743   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.573917   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.574087   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.574246   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.574406   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.574561   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.574573   53711 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:34:57.705485   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:34:57.705532   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.708686   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.709090   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.709150   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.709329   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:57.709536   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.709699   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:57.709857   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:57.710038   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:57.710273   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:57.710299   53711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:34:57.838160   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:34:57.838185   53711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:34:57.838229   53711 buildroot.go:174] setting up certificates
	I0816 13:34:57.838241   53711 provision.go:84] configureAuth start
	I0816 13:34:57.838254   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:34:57.838563   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:34:57.841000   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.841392   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.841421   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.841548   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:57.843913   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.844296   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:57.844331   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:57.844433   53711 provision.go:143] copyHostCerts
	I0816 13:34:57.844493   53711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:34:57.844514   53711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:34:57.844585   53711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:34:57.844693   53711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:34:57.844703   53711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:34:57.844734   53711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:34:57.844811   53711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:34:57.844822   53711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:34:57.844850   53711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:34:57.844937   53711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:34:58.064405   53711 provision.go:177] copyRemoteCerts
	I0816 13:34:58.064456   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:34:58.064485   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.067554   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.067899   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.067927   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.068052   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.068270   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.068399   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.068550   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.155575   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:34:58.179075   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:34:58.201147   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:34:58.224748   53711 provision.go:87] duration metric: took 386.493505ms to configureAuth
	I0816 13:34:58.224776   53711 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:34:58.224959   53711 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:34:58.225028   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.227780   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.228089   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.228114   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.228260   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.228477   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.228659   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.228815   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.229002   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:58.229166   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:58.229186   53711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:34:58.506517   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
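To spot-check the drop-in written by that command, reading it back on the guest is enough (a sketch; the path and contents come from the command and its echoed output above):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio   # should report "active" after the restart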
	
	I0816 13:34:58.506541   53711 main.go:141] libmachine: Checking connection to Docker...
	I0816 13:34:58.506561   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetURL
	I0816 13:34:58.507727   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using libvirt version 6000000
	I0816 13:34:58.510310   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.510682   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.510713   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.510840   53711 main.go:141] libmachine: Docker is up and running!
	I0816 13:34:58.510853   53711 main.go:141] libmachine: Reticulating splines...
	I0816 13:34:58.510861   53711 client.go:171] duration metric: took 24.271575481s to LocalClient.Create
	I0816 13:34:58.510889   53711 start.go:167] duration metric: took 24.271653175s to libmachine.API.Create "old-k8s-version-882237"
	I0816 13:34:58.510918   53711 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:34:58.510935   53711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:34:58.510958   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.511199   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:34:58.511225   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.513287   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.513545   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.513563   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.513660   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.513828   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.513982   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.514110   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.598806   53711 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:34:58.603081   53711 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:34:58.603107   53711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:34:58.603179   53711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:34:58.603247   53711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:34:58.603332   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:34:58.612634   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:34:58.637919   53711 start.go:296] duration metric: took 126.985371ms for postStartSetup
	I0816 13:34:58.637970   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:34:58.638518   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:34:58.641270   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.641589   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.641624   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.641843   53711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:34:58.642092   53711 start.go:128] duration metric: took 24.42354905s to createHost
	I0816 13:34:58.642170   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.644268   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.644603   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.644630   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.644750   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.644936   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.645078   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.645251   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.645425   53711 main.go:141] libmachine: Using SSH client type: native
	I0816 13:34:58.645571   53711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:34:58.645587   53711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:34:58.757829   53711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815298.729789702
	
	I0816 13:34:58.757851   53711 fix.go:216] guest clock: 1723815298.729789702
	I0816 13:34:58.757861   53711 fix.go:229] Guest: 2024-08-16 13:34:58.729789702 +0000 UTC Remote: 2024-08-16 13:34:58.642108832 +0000 UTC m=+74.269001423 (delta=87.68087ms)
	I0816 13:34:58.757907   53711 fix.go:200] guest clock delta is within tolerance: 87.68087ms
	I0816 13:34:58.757915   53711 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 24.53957757s
	I0816 13:34:58.757946   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.758281   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:34:58.762379   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.762858   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.762884   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.763027   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.763540   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.763710   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:34:58.763782   53711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:34:58.763834   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.763918   53711 ssh_runner.go:195] Run: cat /version.json
	I0816 13:34:58.763932   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:34:58.766698   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.766833   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.767054   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.767114   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:34:58.767159   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.767189   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:34:58.767324   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.767417   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:34:58.767525   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.767633   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:34:58.767723   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.767755   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:34:58.767904   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.767918   53711 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:34:58.850216   53711 ssh_runner.go:195] Run: systemctl --version
	I0816 13:34:58.872443   53711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:34:59.035631   53711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:34:59.042839   53711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:34:59.042892   53711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:34:59.060582   53711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:34:59.060606   53711 start.go:495] detecting cgroup driver to use...
	I0816 13:34:59.060663   53711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:34:59.078211   53711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:34:59.092212   53711 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:34:59.092267   53711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:34:59.106310   53711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:34:59.120574   53711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:34:59.250306   53711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:34:59.442694   53711 docker.go:233] disabling docker service ...
	I0816 13:34:59.442758   53711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:34:59.458241   53711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:34:59.471761   53711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:34:59.626497   53711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:34:59.750518   53711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:34:59.764995   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:34:59.783327   53711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:34:59.783417   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.793730   53711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:34:59.793793   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.804302   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:34:59.814819   53711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
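Taken together, the sed edits above should leave the edited keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a sketch of the expected end state, checked with grep rather than reproduced in full):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"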
	I0816 13:34:59.825277   53711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:34:59.836486   53711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:34:59.846152   53711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:34:59.846210   53711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:34:59.859922   53711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
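The status-255 sysctl probe above just means br_netfilter was not loaded yet, which the log itself notes might be okay; the fallback performed here is equivalent to running on the guest:

    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded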
	I0816 13:34:59.869090   53711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:34:59.981748   53711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:35:00.122606   53711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:35:00.122689   53711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:35:00.128485   53711 start.go:563] Will wait 60s for crictl version
	I0816 13:35:00.128555   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:00.132597   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:35:00.175998   53711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:35:00.176074   53711 ssh_runner.go:195] Run: crio --version
	I0816 13:35:00.205444   53711 ssh_runner.go:195] Run: crio --version
	I0816 13:35:00.234496   53711 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:35:00.235753   53711 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:35:00.239045   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:35:00.239400   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:34:49 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:35:00.239422   53711 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:35:00.239612   53711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:35:00.243917   53711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:35:00.257355   53711 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:35:00.257470   53711 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:35:00.257530   53711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:35:00.290346   53711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:35:00.290419   53711 ssh_runner.go:195] Run: which lz4
	I0816 13:35:00.294620   53711 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:35:00.298953   53711 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:35:00.298991   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:35:01.935214   53711 crio.go:462] duration metric: took 1.640622471s to copy over tarball
	I0816 13:35:01.935291   53711 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:35:04.384829   53711 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.449514781s)
	I0816 13:35:04.384857   53711 crio.go:469] duration metric: took 2.449613683s to extract the tarball
	I0816 13:35:04.384864   53711 ssh_runner.go:146] rm: /preloaded.tar.lz4
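The preload flow in this run is: probe for /preloaded.tar.lz4 on the guest, copy the cached tarball over when the probe fails, unpack it into /var, then delete it. A rough manual equivalent is sketched below (minikube streams the file over its own SSH runner, so the scp step is only an approximation; paths are taken from the log):

    # on the guest: is a preload already present?
    stat -c "%s %y" /preloaded.tar.lz4
    # if not, copy the cached tarball over from the host
    scp -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa \
        /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.72.105:/preloaded.tar.lz4
    # back on the guest: unpack into /var, then clean up
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4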
	I0816 13:35:04.427565   53711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:35:04.473700   53711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:35:04.473722   53711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:35:04.473791   53711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.473837   53711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.473855   53711 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.473861   53711 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.473819   53711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.473926   53711 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:35:04.473923   53711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.473799   53711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:35:04.475025   53711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.475178   53711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.475206   53711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.475234   53711 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:35:04.475268   53711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:35:04.475271   53711 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.475280   53711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.475268   53711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.634782   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.658625   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:35:04.661891   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.668677   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.672009   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.682629   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.692347   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.710938   53711 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:35:04.710988   53711 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.711037   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.799542   53711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:35:04.799585   53711 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:35:04.799593   53711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.799616   53711 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:35:04.799644   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.799657   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.799724   53711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:35:04.799765   53711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.799798   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.833801   53711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:35:04.833838   53711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:35:04.833845   53711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.833855   53711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.833895   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.833895   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.846807   53711 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:35:04.846856   53711 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.846895   53711 ssh_runner.go:195] Run: which crictl
	I0816 13:35:04.846897   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.846917   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:04.846951   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:35:04.846973   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.847041   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.847041   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.997860   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:04.997860   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:04.997937   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:35:04.997973   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:04.998019   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:04.998047   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:35:04.998093   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:05.152528   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:35:05.152609   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:35:05.166733   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:35:05.166803   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:35:05.166852   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:35:05.166954   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:35:05.166968   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:35:05.305180   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:35:05.305407   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:35:05.321669   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:35:05.332541   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:35:05.333945   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:35:05.333978   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:35:05.334083   53711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:35:05.341815   53711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:35:05.380730   53711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:35:05.498648   53711 cache_images.go:92] duration metric: took 1.024910728s to LoadCachedImages
	W0816 13:35:05.498780   53711 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
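The image-cache handling above boils down to: ask the runtime whether each image is already present, and if it is not, remove any stale tag so the image can be reloaded from the local cache directory. A minimal Go sketch of that check, not minikube's actual cache_images code, assuming podman and crictl are available on the node and using two of the image names from the log:

package main

import (
	"fmt"
	"os/exec"
)

// imageExists asks podman for the image ID; a non-zero exit means the image
// is not present in the container runtime's store.
func imageExists(image string) bool {
	err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run()
	return err == nil
}

func main() {
	// Two of the images the log above probes; the real list covers all
	// control-plane components for the requested Kubernetes version.
	images := []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
	}
	for _, img := range images {
		if imageExists(img) {
			fmt.Println(img, "already present, skipping transfer")
			continue
		}
		fmt.Println(img, "needs transfer; removing any stale tag")
		// Mirrors the `crictl rmi` calls in the log; the error is ignored
		// because the tag may simply not exist yet.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
	}
}

On this failed run even the fallback could not help, because the cached tarballs themselves were missing from .minikube/cache/images, which is what the stat error above reports.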
	I0816 13:35:05.498810   53711 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:35:05.498935   53711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
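The kubelet ExecStart line above is rendered from per-node parameters (kubelet binary path, CRI socket, hostname override, node IP). A rough sketch of that templating step, with invented field names rather than minikube's real config structs:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds only the fields the template below needs; the real
// minikube config struct is considerably larger.
type nodeParams struct {
	KubeletPath      string
	ContainerSocket  string
	HostnameOverride string
	NodeIP           string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.ContainerSocket}} --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	p := nodeParams{
		KubeletPath:      "/var/lib/minikube/binaries/v1.20.0/kubelet",
		ContainerSocket:  "unix:///var/run/crio/crio.sock",
		HostnameOverride: "old-k8s-version-882237",
		NodeIP:           "192.168.72.105",
	}
	// Render the systemd drop-in to stdout; minikube copies the result to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
	template.Must(template.New("kubelet").Parse(unit)).Execute(os.Stdout, p)
}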
	I0816 13:35:05.499016   53711 ssh_runner.go:195] Run: crio config
	I0816 13:35:05.570096   53711 cni.go:84] Creating CNI manager for ""
	I0816 13:35:05.570118   53711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:35:05.570130   53711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:35:05.570152   53711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:35:05.570311   53711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
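The kubeadm.yaml dumped above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). When debugging a run like this one it can help to decode a copy of the rendered file and confirm each document's apiVersion and kind; the sketch below uses gopkg.in/yaml.v3 and is an illustration only, since minikube generates the file from templates rather than re-parsing it:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: checkcfg <kubeadm.yaml>")
	}
	// Expects a copy of /var/tmp/minikube/kubeadm.yaml pulled off the node.
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Print the apiVersion and kind of every document so a typo in,
		// say, kubeproxy.config.k8s.io/v1alpha1 is easy to spot.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}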
	
	I0816 13:35:05.570374   53711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:35:05.582463   53711 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:35:05.582536   53711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:35:05.594526   53711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:35:05.614392   53711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:35:05.632339   53711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:35:05.650063   53711 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:35:05.654505   53711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:35:05.667261   53711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:35:05.794746   53711 ssh_runner.go:195] Run: sudo systemctl start kubelet
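The /etc/hosts handling above amounts to: drop any existing control-plane.minikube.internal mapping, then append one for the current IP. A pure-Go sketch of the same idempotent rewrite, taking the hosts file path and IP as arguments instead of editing /etc/hosts in place:

package main

import (
	"log"
	"os"
	"strings"
)

const cpName = "control-plane.minikube.internal"

func main() {
	if len(os.Args) != 3 {
		log.Fatal("usage: hostsfix <hosts-file> <ip>")
	}
	path, ip := os.Args[1], os.Args[2]
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		// Drop any existing mapping for the control-plane name, exactly
		// like the grep -v in the log above.
		if strings.HasSuffix(line, "\t"+cpName) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+cpName)
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}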
	I0816 13:35:05.812919   53711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:35:05.812938   53711 certs.go:194] generating shared ca certs ...
	I0816 13:35:05.812951   53711 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:05.813111   53711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:35:05.813192   53711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:35:05.813204   53711 certs.go:256] generating profile certs ...
	I0816 13:35:05.813266   53711 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:35:05.813283   53711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt with IP's: []
	I0816 13:35:05.899586   53711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt ...
	I0816 13:35:05.899616   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: {Name:mkb8ad7deb29a0014c885f5dd3b2339661a5f1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:05.899770   53711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key ...
	I0816 13:35:05.899783   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key: {Name:mk770a549b659846f110d19a24ba4442cf7bc258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:05.899855   53711 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:35:05.899878   53711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.105]
	I0816 13:35:06.086072   53711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8 ...
	I0816 13:35:06.086098   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8: {Name:mkacec3a3ad6fe417dd5c97ef6e2a1bdb6b021bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.086231   53711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8 ...
	I0816 13:35:06.086250   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8: {Name:mkb38dabae87f5f624dad03f3ba3ce14d833fa38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.086314   53711 certs.go:381] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt.e63f19d8 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt
	I0816 13:35:06.086384   53711 certs.go:385] copying /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8 -> /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key
	I0816 13:35:06.086440   53711 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:35:06.086455   53711 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt with IP's: []
	I0816 13:35:06.145219   53711 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt ...
	I0816 13:35:06.145241   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt: {Name:mk9bf8840b3de3673b5ab193a6173b7c35470d3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.145387   53711 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key ...
	I0816 13:35:06.145403   53711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key: {Name:mk7153ded6f2def9061b6e4db01262050549c214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:35:06.145591   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:35:06.145625   53711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:35:06.145639   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:35:06.145662   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:35:06.145687   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:35:06.145711   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:35:06.145756   53711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
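The certs.go/crypto.go steps above produce CA-signed client and serving certificates for the profile. As a simplified illustration of the underlying crypto/x509 plumbing, the sketch below creates a self-signed serving certificate with the same IP SANs as the apiserver cert in the log; the real code signs with the shared minikubeCA rather than self-signing:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SANs as the apiserver cert generated in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.105"),
		},
	}
	// Self-signed: the template doubles as its own parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key)})
}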
	I0816 13:35:06.146398   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:35:06.178670   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:35:06.208175   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:35:06.238389   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:35:06.281924   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:35:06.309605   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:35:06.336991   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:35:06.364518   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:35:06.391596   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:35:06.427798   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:35:06.457117   53711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:35:06.492950   53711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:35:06.512623   53711 ssh_runner.go:195] Run: openssl version
	I0816 13:35:06.519471   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:35:06.536102   53711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:35:06.541621   53711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:35:06.541685   53711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:35:06.548197   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:35:06.561299   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:35:06.573610   53711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:35:06.578287   53711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:35:06.578343   53711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:35:06.586521   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:35:06.599697   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:35:06.612616   53711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:06.620194   53711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:06.620266   53711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:35:06.627787   53711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
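The openssl/ln pattern above publishes each CA certificate under its OpenSSL subject hash in /etc/ssl/certs so the node's trust store picks it up. A small Go sketch of the same hash-and-symlink step; the paths are illustrative and the operation needs root on the node:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink reproduces the pattern in the log: compute the OpenSSL subject
// hash of a CA certificate and expose it as <certsDir>/<hash>.0.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Idempotent: only create the symlink if it does not already exist,
	// matching the test -L guard in the log.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}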
	I0816 13:35:06.642597   53711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:35:06.647944   53711 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 13:35:06.648000   53711 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:35:06.648067   53711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:35:06.648118   53711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:35:06.715446   53711 cri.go:89] found id: ""
	I0816 13:35:06.715523   53711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:35:06.731071   53711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:35:06.750235   53711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:35:06.768053   53711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:35:06.768068   53711 kubeadm.go:157] found existing configuration files:
	
	I0816 13:35:06.768113   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:35:06.778751   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:35:06.778815   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:35:06.791285   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:35:06.801308   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:35:06.801378   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:35:06.811475   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:35:06.820924   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:35:06.820981   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:35:06.831409   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:35:06.841061   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:35:06.841132   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
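The stale-config cleanup above keeps a kubeconfig only if it already references the expected control-plane endpoint and removes it otherwise, so kubeadm can regenerate it on init. A pure-Go sketch of that check:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up, same as the grep failures
			// in the log above.
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", f)
			os.Remove(f)
		}
	}
}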
	I0816 13:35:06.851070   53711 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:35:06.971768   53711 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:35:06.971889   53711 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:35:07.122602   53711 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:35:07.122793   53711 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:35:07.122959   53711 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:35:07.321792   53711 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:35:07.408811   53711 out.go:235]   - Generating certificates and keys ...
	I0816 13:35:07.408963   53711 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:35:07.409065   53711 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:35:07.583136   53711 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 13:35:07.659833   53711 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 13:35:07.873225   53711 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 13:35:08.185445   53711 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 13:35:08.304854   53711 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 13:35:08.305142   53711 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-882237] and IPs [192.168.72.105 127.0.0.1 ::1]
	I0816 13:35:08.455565   53711 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 13:35:08.455867   53711 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-882237] and IPs [192.168.72.105 127.0.0.1 ::1]
	I0816 13:35:08.830735   53711 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 13:35:09.528588   53711 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 13:35:09.636503   53711 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 13:35:09.636821   53711 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:35:09.817796   53711 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:35:10.095902   53711 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:35:10.355911   53711 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:35:10.474564   53711 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:35:10.495265   53711 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:35:10.495443   53711 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:35:10.495521   53711 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:35:10.629692   53711 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:35:10.631500   53711 out.go:235]   - Booting up control plane ...
	I0816 13:35:10.631632   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:35:10.635885   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:35:10.637781   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:35:10.638679   53711 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:35:10.655477   53711 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:35:50.648056   53711 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:35:50.648815   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:35:50.649102   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:35:55.649500   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:35:55.649768   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:36:05.648829   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:36:05.649094   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:36:25.648519   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:36:25.648812   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:37:05.649952   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:37:05.650183   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:37:05.650201   53711 kubeadm.go:310] 
	I0816 13:37:05.650244   53711 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:37:05.650287   53711 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:37:05.650310   53711 kubeadm.go:310] 
	I0816 13:37:05.650368   53711 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:37:05.650417   53711 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:37:05.650561   53711 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:37:05.650589   53711 kubeadm.go:310] 
	I0816 13:37:05.650748   53711 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:37:05.650789   53711 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:37:05.650820   53711 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:37:05.650826   53711 kubeadm.go:310] 
	I0816 13:37:05.650919   53711 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:37:05.651003   53711 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:37:05.651012   53711 kubeadm.go:310] 
	I0816 13:37:05.651116   53711 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:37:05.651226   53711 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:37:05.651335   53711 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:37:05.651397   53711 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:37:05.651404   53711 kubeadm.go:310] 
	I0816 13:37:05.651935   53711 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:37:05.652054   53711 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:37:05.652219   53711 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 13:37:05.652319   53711 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-882237] and IPs [192.168.72.105 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-882237] and IPs [192.168.72.105 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
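From here the wait-control-plane phase has timed out because the kubelet never answered its health check on port 10248. The kubeadm output above already lists the useful diagnostics; the sketch below just gathers them in one pass (run it on the node, for example via minikube ssh, as root; the flags are the standard systemctl/journalctl/crictl/curl ones, nothing minikube-specific):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same checks kubeadm suggests when the kubelet never comes up.
	cmds := [][]string{
		{"systemctl", "status", "kubelet", "--no-pager"},
		{"journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100"},
		{"crictl", "--runtime-endpoint", "unix:///var/run/crio/crio.sock", "ps", "-a"},
		{"curl", "-sS", "http://localhost:10248/healthz"},
	}
	for _, c := range cmds {
		fmt.Printf("==> %v\n", c)
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("(command failed:", err, ")")
		}
	}
}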
	
	I0816 13:37:05.652366   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:37:06.145040   53711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:37:06.158983   53711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:37:06.168476   53711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:37:06.168493   53711 kubeadm.go:157] found existing configuration files:
	
	I0816 13:37:06.168536   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:37:06.177364   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:37:06.177411   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:37:06.186527   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:37:06.195240   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:37:06.195303   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:37:06.204509   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:37:06.213812   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:37:06.213860   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:37:06.223206   53711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:37:06.232254   53711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:37:06.232296   53711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:37:06.241763   53711 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:37:06.325403   53711 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:37:06.325459   53711 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:37:06.461033   53711 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:37:06.461133   53711 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:37:06.461228   53711 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:37:06.648333   53711 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:37:06.650181   53711 out.go:235]   - Generating certificates and keys ...
	I0816 13:37:06.650333   53711 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:37:06.650506   53711 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:37:06.650693   53711 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:37:06.650861   53711 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:37:06.651019   53711 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:37:06.651169   53711 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:37:06.651438   53711 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:37:06.651705   53711 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:37:06.651825   53711 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:37:06.651951   53711 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:37:06.652011   53711 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:37:06.652128   53711 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:37:06.783952   53711 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:37:06.925764   53711 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:37:07.066990   53711 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:37:07.256989   53711 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:37:07.272730   53711 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:37:07.273946   53711 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:37:07.274012   53711 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:37:07.420183   53711 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:37:07.422242   53711 out.go:235]   - Booting up control plane ...
	I0816 13:37:07.422375   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:37:07.425396   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:37:07.430977   53711 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:37:07.432029   53711 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:37:07.434797   53711 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:37:47.437635   53711 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:37:47.437723   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:37:47.437935   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:37:52.438481   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:37:52.438718   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:38:02.439340   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:38:02.439586   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:38:22.438791   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:38:22.438993   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:39:02.438919   53711 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:39:02.439142   53711 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:39:02.439267   53711 kubeadm.go:310] 
	I0816 13:39:02.439336   53711 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:39:02.439414   53711 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:39:02.439431   53711 kubeadm.go:310] 
	I0816 13:39:02.439504   53711 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:39:02.439554   53711 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:39:02.439700   53711 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:39:02.439720   53711 kubeadm.go:310] 
	I0816 13:39:02.439862   53711 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:39:02.439924   53711 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:39:02.439973   53711 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:39:02.439983   53711 kubeadm.go:310] 
	I0816 13:39:02.440096   53711 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:39:02.440213   53711 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:39:02.440222   53711 kubeadm.go:310] 
	I0816 13:39:02.440364   53711 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:39:02.440502   53711 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:39:02.440607   53711 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:39:02.440687   53711 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:39:02.440710   53711 kubeadm.go:310] 
	I0816 13:39:02.440851   53711 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:39:02.440988   53711 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:39:02.441141   53711 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:39:02.441170   53711 kubeadm.go:394] duration metric: took 3m55.79317321s to StartCluster
	I0816 13:39:02.441245   53711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:39:02.441314   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:39:02.481446   53711 cri.go:89] found id: ""
	I0816 13:39:02.481479   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.481488   53711 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:39:02.481494   53711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:39:02.481552   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:39:02.517661   53711 cri.go:89] found id: ""
	I0816 13:39:02.517693   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.517705   53711 logs.go:278] No container was found matching "etcd"
	I0816 13:39:02.517712   53711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:39:02.517765   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:39:02.551238   53711 cri.go:89] found id: ""
	I0816 13:39:02.551268   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.551288   53711 logs.go:278] No container was found matching "coredns"
	I0816 13:39:02.551296   53711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:39:02.551357   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:39:02.584960   53711 cri.go:89] found id: ""
	I0816 13:39:02.584987   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.584996   53711 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:39:02.585001   53711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:39:02.585067   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:39:02.621463   53711 cri.go:89] found id: ""
	I0816 13:39:02.621493   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.621505   53711 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:39:02.621513   53711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:39:02.621575   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:39:02.660769   53711 cri.go:89] found id: ""
	I0816 13:39:02.660790   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.660797   53711 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:39:02.660803   53711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:39:02.660847   53711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:39:02.705491   53711 cri.go:89] found id: ""
	I0816 13:39:02.705524   53711 logs.go:276] 0 containers: []
	W0816 13:39:02.705532   53711 logs.go:278] No container was found matching "kindnet"
	I0816 13:39:02.705541   53711 logs.go:123] Gathering logs for dmesg ...
	I0816 13:39:02.705555   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:39:02.725498   53711 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:39:02.725526   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:39:02.876744   53711 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:39:02.876766   53711 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:39:02.876782   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:39:02.985309   53711 logs.go:123] Gathering logs for container status ...
	I0816 13:39:02.985348   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:39:03.026198   53711 logs.go:123] Gathering logs for kubelet ...
	I0816 13:39:03.026226   53711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:39:03.082718   53711 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:39:03.082776   53711 out.go:270] * 
	W0816 13:39:03.082836   53711 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:39:03.082849   53711 out.go:270] * 
	W0816 13:39:03.083688   53711 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:39:03.086308   53711 out.go:201] 
	W0816 13:39:03.087583   53711 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:39:03.087630   53711 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:39:03.087646   53711 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:39:03.089132   53711 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 6 (228.876782ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:03.354597   57110 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-882237" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (319.00s)
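
The suggestion emitted in the log above points at the kubelet cgroup driver. A minimal, unverified sketch of how that remediation could be retried against this profile, reusing the arguments from the failed invocation plus the flag proposed in the log (whether it actually clears the kubelet healthz failure on v1.20.0 is not confirmed by this report):

	# recreate the profile with the cgroup-driver override suggested above
	out/minikube-linux-amd64 delete -p old-k8s-version-882237
	out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still refuses connections on 10248, inspect it on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-882237 -- "sudo journalctl -xeu kubelet | tail -n 100"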

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-302520 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-302520 --alsologtostderr -v=3: exit status 82 (2m0.558652466s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-302520"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:36:33.565134   55896 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:36:33.565234   55896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:36:33.565244   55896 out.go:358] Setting ErrFile to fd 2...
	I0816 13:36:33.565249   55896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:36:33.565523   55896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:36:33.565808   55896 out.go:352] Setting JSON to false
	I0816 13:36:33.565917   55896 mustload.go:65] Loading cluster: embed-certs-302520
	I0816 13:36:33.566344   55896 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:36:33.566422   55896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:36:33.566593   55896 mustload.go:65] Loading cluster: embed-certs-302520
	I0816 13:36:33.566692   55896 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:36:33.566730   55896 stop.go:39] StopHost: embed-certs-302520
	I0816 13:36:33.567113   55896 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:36:33.567149   55896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:36:33.582571   55896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0816 13:36:33.582982   55896 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:36:33.583514   55896 main.go:141] libmachine: Using API Version  1
	I0816 13:36:33.583532   55896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:36:33.583875   55896 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:36:33.586189   55896 out.go:177] * Stopping node "embed-certs-302520"  ...
	I0816 13:36:33.587373   55896 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 13:36:33.587418   55896 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:36:33.587666   55896 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 13:36:33.587694   55896 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:36:33.590580   55896 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:36:33.590978   55896 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:35:43 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:36:33.591013   55896 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:36:33.591150   55896 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:36:33.591310   55896 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:36:33.591486   55896 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:36:33.591664   55896 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:36:33.711237   55896 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 13:36:33.769610   55896 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 13:36:33.830002   55896 main.go:141] libmachine: Stopping "embed-certs-302520"...
	I0816 13:36:33.830049   55896 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:36:33.831770   55896 main.go:141] libmachine: (embed-certs-302520) Calling .Stop
	I0816 13:36:33.835398   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 0/120
	I0816 13:36:34.836771   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 1/120
	I0816 13:36:35.838305   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 2/120
	I0816 13:36:36.839776   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 3/120
	I0816 13:36:37.841344   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 4/120
	I0816 13:36:38.843562   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 5/120
	I0816 13:36:39.844770   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 6/120
	I0816 13:36:40.846045   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 7/120
	I0816 13:36:41.847614   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 8/120
	I0816 13:36:42.849652   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 9/120
	I0816 13:36:43.851939   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 10/120
	I0816 13:36:44.853493   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 11/120
	I0816 13:36:45.855603   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 12/120
	I0816 13:36:46.856926   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 13/120
	I0816 13:36:47.858231   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 14/120
	I0816 13:36:48.860659   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 15/120
	I0816 13:36:49.862267   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 16/120
	I0816 13:36:50.863689   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 17/120
	I0816 13:36:51.864960   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 18/120
	I0816 13:36:52.866416   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 19/120
	I0816 13:36:53.868594   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 20/120
	I0816 13:36:54.870093   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 21/120
	I0816 13:36:55.871902   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 22/120
	I0816 13:36:56.873391   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 23/120
	I0816 13:36:57.874777   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 24/120
	I0816 13:36:58.876222   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 25/120
	I0816 13:36:59.877537   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 26/120
	I0816 13:37:00.879692   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 27/120
	I0816 13:37:01.881200   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 28/120
	I0816 13:37:02.882757   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 29/120
	I0816 13:37:03.885142   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 30/120
	I0816 13:37:04.887378   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 31/120
	I0816 13:37:05.888847   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 32/120
	I0816 13:37:06.890221   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 33/120
	I0816 13:37:07.891556   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 34/120
	I0816 13:37:08.893399   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 35/120
	I0816 13:37:09.894823   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 36/120
	I0816 13:37:10.896076   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 37/120
	I0816 13:37:11.897474   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 38/120
	I0816 13:37:12.898727   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 39/120
	I0816 13:37:13.900928   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 40/120
	I0816 13:37:14.902339   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 41/120
	I0816 13:37:15.903665   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 42/120
	I0816 13:37:16.905120   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 43/120
	I0816 13:37:17.906443   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 44/120
	I0816 13:37:18.908343   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 45/120
	I0816 13:37:19.909754   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 46/120
	I0816 13:37:20.911462   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 47/120
	I0816 13:37:21.912792   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 48/120
	I0816 13:37:22.914239   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 49/120
	I0816 13:37:23.916380   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 50/120
	I0816 13:37:24.917798   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 51/120
	I0816 13:37:25.919218   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 52/120
	I0816 13:37:26.920600   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 53/120
	I0816 13:37:27.921938   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 54/120
	I0816 13:37:28.923200   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 55/120
	I0816 13:37:29.924507   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 56/120
	I0816 13:37:30.925808   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 57/120
	I0816 13:37:31.927242   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 58/120
	I0816 13:37:32.928582   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 59/120
	I0816 13:37:33.930751   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 60/120
	I0816 13:37:34.931929   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 61/120
	I0816 13:37:35.933381   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 62/120
	I0816 13:37:36.935512   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 63/120
	I0816 13:37:37.937010   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 64/120
	I0816 13:37:38.938863   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 65/120
	I0816 13:37:39.940270   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 66/120
	I0816 13:37:40.941773   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 67/120
	I0816 13:37:41.943379   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 68/120
	I0816 13:37:42.944686   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 69/120
	I0816 13:37:43.946774   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 70/120
	I0816 13:37:44.948066   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 71/120
	I0816 13:37:45.949379   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 72/120
	I0816 13:37:46.950637   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 73/120
	I0816 13:37:47.951938   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 74/120
	I0816 13:37:48.953740   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 75/120
	I0816 13:37:49.955011   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 76/120
	I0816 13:37:50.956257   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 77/120
	I0816 13:37:51.957554   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 78/120
	I0816 13:37:52.958899   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 79/120
	I0816 13:37:53.960976   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 80/120
	I0816 13:37:54.962388   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 81/120
	I0816 13:37:55.963758   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 82/120
	I0816 13:37:56.965274   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 83/120
	I0816 13:37:57.966557   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 84/120
	I0816 13:37:58.968857   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 85/120
	I0816 13:37:59.970217   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 86/120
	I0816 13:38:01.016560   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 87/120
	I0816 13:38:02.017984   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 88/120
	I0816 13:38:03.019568   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 89/120
	I0816 13:38:04.021744   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 90/120
	I0816 13:38:05.023980   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 91/120
	I0816 13:38:06.025521   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 92/120
	I0816 13:38:07.027605   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 93/120
	I0816 13:38:08.029218   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 94/120
	I0816 13:38:09.031604   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 95/120
	I0816 13:38:10.033633   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 96/120
	I0816 13:38:11.035242   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 97/120
	I0816 13:38:12.036704   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 98/120
	I0816 13:38:13.038229   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 99/120
	I0816 13:38:14.040415   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 100/120
	I0816 13:38:15.042219   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 101/120
	I0816 13:38:16.043945   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 102/120
	I0816 13:38:17.045443   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 103/120
	I0816 13:38:18.047306   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 104/120
	I0816 13:38:19.048997   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 105/120
	I0816 13:38:20.050017   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 106/120
	I0816 13:38:21.051466   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 107/120
	I0816 13:38:22.053672   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 108/120
	I0816 13:38:23.055349   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 109/120
	I0816 13:38:24.057333   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 110/120
	I0816 13:38:25.058814   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 111/120
	I0816 13:38:26.060441   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 112/120
	I0816 13:38:27.061823   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 113/120
	I0816 13:38:28.063206   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 114/120
	I0816 13:38:29.065233   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 115/120
	I0816 13:38:30.066660   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 116/120
	I0816 13:38:31.068994   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 117/120
	I0816 13:38:32.070641   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 118/120
	I0816 13:38:33.072167   55896 main.go:141] libmachine: (embed-certs-302520) Waiting for machine to stop 119/120
	I0816 13:38:34.072769   55896 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 13:38:34.072849   55896 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 13:38:34.074895   55896 out.go:201] 
	W0816 13:38:34.076336   55896 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 13:38:34.076358   55896 out.go:270] * 
	* 
	W0816 13:38:34.080023   55896 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:38:34.081416   55896 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-302520 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520: exit status 3 (18.513430649s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:38:52.597231   56839 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0816 13:38:52.597253   56839 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-302520" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-311070 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-311070 --alsologtostderr -v=3: exit status 82 (2m0.464021865s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-311070"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:37:00.700336   56068 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:37:00.700470   56068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:37:00.700481   56068 out.go:358] Setting ErrFile to fd 2...
	I0816 13:37:00.700487   56068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:37:00.700655   56068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:37:00.700874   56068 out.go:352] Setting JSON to false
	I0816 13:37:00.700982   56068 mustload.go:65] Loading cluster: no-preload-311070
	I0816 13:37:00.701341   56068 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:37:00.701407   56068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:37:00.701574   56068 mustload.go:65] Loading cluster: no-preload-311070
	I0816 13:37:00.701666   56068 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:37:00.701706   56068 stop.go:39] StopHost: no-preload-311070
	I0816 13:37:00.702091   56068 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:37:00.702143   56068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:37:00.716790   56068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0816 13:37:00.717296   56068 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:37:00.717932   56068 main.go:141] libmachine: Using API Version  1
	I0816 13:37:00.717958   56068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:37:00.718295   56068 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:37:00.720278   56068 out.go:177] * Stopping node "no-preload-311070"  ...
	I0816 13:37:00.721617   56068 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 13:37:00.721658   56068 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:37:00.721926   56068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 13:37:00.721953   56068 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:37:00.725254   56068 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:37:00.725746   56068 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:35:21 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:37:00.725781   56068 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:37:00.725965   56068 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:37:00.726151   56068 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:37:00.726299   56068 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:37:00.726474   56068 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:37:00.813313   56068 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 13:37:00.874486   56068 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 13:37:00.921255   56068 main.go:141] libmachine: Stopping "no-preload-311070"...
	I0816 13:37:00.921287   56068 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:37:00.922758   56068 main.go:141] libmachine: (no-preload-311070) Calling .Stop
	I0816 13:37:00.926482   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 0/120
	I0816 13:37:01.927888   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 1/120
	I0816 13:37:02.929149   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 2/120
	I0816 13:37:03.930534   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 3/120
	I0816 13:37:04.931885   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 4/120
	I0816 13:37:05.933979   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 5/120
	I0816 13:37:06.935338   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 6/120
	I0816 13:37:07.936710   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 7/120
	I0816 13:37:08.938021   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 8/120
	I0816 13:37:09.939370   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 9/120
	I0816 13:37:10.941243   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 10/120
	I0816 13:37:11.942702   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 11/120
	I0816 13:37:12.943967   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 12/120
	I0816 13:37:13.945430   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 13/120
	I0816 13:37:14.947040   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 14/120
	I0816 13:37:15.949057   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 15/120
	I0816 13:37:16.950249   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 16/120
	I0816 13:37:17.951533   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 17/120
	I0816 13:37:18.952948   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 18/120
	I0816 13:37:19.954134   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 19/120
	I0816 13:37:20.956299   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 20/120
	I0816 13:37:21.958660   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 21/120
	I0816 13:37:22.959850   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 22/120
	I0816 13:37:23.961372   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 23/120
	I0816 13:37:24.962725   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 24/120
	I0816 13:37:25.965011   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 25/120
	I0816 13:37:26.966177   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 26/120
	I0816 13:37:27.967363   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 27/120
	I0816 13:37:28.968525   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 28/120
	I0816 13:37:29.969626   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 29/120
	I0816 13:37:30.972228   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 30/120
	I0816 13:37:31.973536   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 31/120
	I0816 13:37:32.975110   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 32/120
	I0816 13:37:33.976375   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 33/120
	I0816 13:37:34.977939   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 34/120
	I0816 13:37:35.980077   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 35/120
	I0816 13:37:36.981751   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 36/120
	I0816 13:37:37.983456   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 37/120
	I0816 13:37:38.984732   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 38/120
	I0816 13:37:39.986304   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 39/120
	I0816 13:37:40.987991   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 40/120
	I0816 13:37:41.989326   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 41/120
	I0816 13:37:42.991252   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 42/120
	I0816 13:37:43.992679   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 43/120
	I0816 13:37:44.993959   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 44/120
	I0816 13:37:45.995749   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 45/120
	I0816 13:37:46.996899   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 46/120
	I0816 13:37:47.998478   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 47/120
	I0816 13:37:49.000434   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 48/120
	I0816 13:37:50.001635   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 49/120
	I0816 13:37:51.003724   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 50/120
	I0816 13:37:52.004936   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 51/120
	I0816 13:37:53.006164   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 52/120
	I0816 13:37:54.008130   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 53/120
	I0816 13:37:55.009797   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 54/120
	I0816 13:37:56.011611   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 55/120
	I0816 13:37:57.012958   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 56/120
	I0816 13:37:58.014434   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 57/120
	I0816 13:37:59.015719   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 58/120
	I0816 13:38:00.017357   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 59/120
	I0816 13:38:01.019201   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 60/120
	I0816 13:38:02.020347   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 61/120
	I0816 13:38:03.021645   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 62/120
	I0816 13:38:04.023205   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 63/120
	I0816 13:38:05.024614   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 64/120
	I0816 13:38:06.026544   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 65/120
	I0816 13:38:07.028006   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 66/120
	I0816 13:38:08.029943   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 67/120
	I0816 13:38:09.031394   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 68/120
	I0816 13:38:10.032633   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 69/120
	I0816 13:38:11.035087   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 70/120
	I0816 13:38:12.036600   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 71/120
	I0816 13:38:13.038064   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 72/120
	I0816 13:38:14.039808   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 73/120
	I0816 13:38:15.041572   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 74/120
	I0816 13:38:16.043709   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 75/120
	I0816 13:38:17.045059   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 76/120
	I0816 13:38:18.046385   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 77/120
	I0816 13:38:19.047767   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 78/120
	I0816 13:38:20.049383   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 79/120
	I0816 13:38:21.051283   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 80/120
	I0816 13:38:22.053130   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 81/120
	I0816 13:38:23.055041   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 82/120
	I0816 13:38:24.056538   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 83/120
	I0816 13:38:25.058146   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 84/120
	I0816 13:38:26.060673   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 85/120
	I0816 13:38:27.062233   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 86/120
	I0816 13:38:28.063522   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 87/120
	I0816 13:38:29.064995   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 88/120
	I0816 13:38:30.066428   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 89/120
	I0816 13:38:31.068649   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 90/120
	I0816 13:38:32.070488   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 91/120
	I0816 13:38:33.072011   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 92/120
	I0816 13:38:34.073868   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 93/120
	I0816 13:38:35.075289   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 94/120
	I0816 13:38:36.077595   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 95/120
	I0816 13:38:37.079092   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 96/120
	I0816 13:38:38.080636   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 97/120
	I0816 13:38:39.081954   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 98/120
	I0816 13:38:40.083345   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 99/120
	I0816 13:38:41.085088   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 100/120
	I0816 13:38:42.086516   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 101/120
	I0816 13:38:43.088054   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 102/120
	I0816 13:38:44.090373   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 103/120
	I0816 13:38:45.091662   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 104/120
	I0816 13:38:46.093620   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 105/120
	I0816 13:38:47.095215   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 106/120
	I0816 13:38:48.096561   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 107/120
	I0816 13:38:49.098223   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 108/120
	I0816 13:38:50.099727   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 109/120
	I0816 13:38:51.102025   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 110/120
	I0816 13:38:52.103286   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 111/120
	I0816 13:38:53.104586   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 112/120
	I0816 13:38:54.105862   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 113/120
	I0816 13:38:55.107289   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 114/120
	I0816 13:38:56.109260   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 115/120
	I0816 13:38:57.111489   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 116/120
	I0816 13:38:58.112795   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 117/120
	I0816 13:38:59.114209   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 118/120
	I0816 13:39:00.115667   56068 main.go:141] libmachine: (no-preload-311070) Waiting for machine to stop 119/120
	I0816 13:39:01.116949   56068 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 13:39:01.116996   56068 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 13:39:01.118882   56068 out.go:201] 
	W0816 13:39:01.119973   56068 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 13:39:01.119989   56068 out.go:270] * 
	* 
	W0816 13:39:01.122665   56068 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:39:01.124274   56068 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-311070 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070: exit status 3 (18.607344643s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:19.733301   57032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	E0816 13:39:19.733323   57032 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-311070" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520: exit status 3 (3.167541684s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:38:55.765259   56946 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0816 13:38:55.765280   56946 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-302520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0816 13:38:56.823323   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-302520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152093774s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-302520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520: exit status 3 (3.063447515s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:04.981320   57062 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0816 13:39:04.981346   57062 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-302520" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-882237 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-882237 create -f testdata/busybox.yaml: exit status 1 (42.851047ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-882237" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-882237 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 6 (224.032186ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:03.623863   57150 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-882237" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 6 (220.508426ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:03.845266   57180 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-882237" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (82.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-882237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-882237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m22.482298372s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-882237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-882237 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-882237 describe deploy/metrics-server -n kube-system: exit status 1 (42.040497ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-882237" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-882237 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 6 (224.693248ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:40:26.593974   57814 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-882237" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (82.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070: exit status 3 (3.167655162s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:22.901302   57329 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	E0816 13:39:22.901324   57329 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-311070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-311070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153372505s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-311070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070: exit status 3 (3.06214595s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:39:32.117290   57410 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host
	E0816 13:39:32.117309   57410 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-311070" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-893736 --alsologtostderr -v=3
E0816 13:40:19.894479   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-893736 --alsologtostderr -v=3: exit status 82 (2m0.503633461s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-893736"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:39:44.524154   57613 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:39:44.524278   57613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:39:44.524287   57613 out.go:358] Setting ErrFile to fd 2...
	I0816 13:39:44.524294   57613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:39:44.524492   57613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:39:44.524726   57613 out.go:352] Setting JSON to false
	I0816 13:39:44.524825   57613 mustload.go:65] Loading cluster: default-k8s-diff-port-893736
	I0816 13:39:44.525181   57613 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:39:44.525261   57613 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:39:44.525447   57613 mustload.go:65] Loading cluster: default-k8s-diff-port-893736
	I0816 13:39:44.525572   57613 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:39:44.525610   57613 stop.go:39] StopHost: default-k8s-diff-port-893736
	I0816 13:39:44.525995   57613 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:39:44.526040   57613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:39:44.540500   57613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0816 13:39:44.540991   57613 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:39:44.541554   57613 main.go:141] libmachine: Using API Version  1
	I0816 13:39:44.541585   57613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:39:44.541911   57613 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:39:44.544499   57613 out.go:177] * Stopping node "default-k8s-diff-port-893736"  ...
	I0816 13:39:44.546126   57613 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 13:39:44.546153   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:39:44.546346   57613 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 13:39:44.546368   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:39:44.548957   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:39:44.549311   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:38:15 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:39:44.549339   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:39:44.549499   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:39:44.549643   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:39:44.549812   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:39:44.549956   57613 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:39:44.661284   57613 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 13:39:44.725700   57613 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 13:39:44.782197   57613 main.go:141] libmachine: Stopping "default-k8s-diff-port-893736"...
	I0816 13:39:44.782233   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:39:44.783736   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Stop
	I0816 13:39:44.787344   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 0/120
	I0816 13:39:45.788564   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 1/120
	I0816 13:39:46.789975   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 2/120
	I0816 13:39:47.791229   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 3/120
	I0816 13:39:48.792828   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 4/120
	I0816 13:39:49.794710   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 5/120
	I0816 13:39:50.796320   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 6/120
	I0816 13:39:51.797711   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 7/120
	I0816 13:39:52.799072   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 8/120
	I0816 13:39:53.800394   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 9/120
	I0816 13:39:54.802639   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 10/120
	I0816 13:39:55.803927   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 11/120
	I0816 13:39:56.805403   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 12/120
	I0816 13:39:57.806986   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 13/120
	I0816 13:39:58.808457   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 14/120
	I0816 13:39:59.810583   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 15/120
	I0816 13:40:00.812186   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 16/120
	I0816 13:40:01.813652   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 17/120
	I0816 13:40:02.815817   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 18/120
	I0816 13:40:03.817441   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 19/120
	I0816 13:40:04.819983   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 20/120
	I0816 13:40:05.821686   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 21/120
	I0816 13:40:06.823602   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 22/120
	I0816 13:40:07.824947   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 23/120
	I0816 13:40:08.826801   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 24/120
	I0816 13:40:09.829139   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 25/120
	I0816 13:40:10.830670   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 26/120
	I0816 13:40:11.832144   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 27/120
	I0816 13:40:12.833682   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 28/120
	I0816 13:40:13.834966   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 29/120
	I0816 13:40:14.837155   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 30/120
	I0816 13:40:15.838565   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 31/120
	I0816 13:40:16.840328   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 32/120
	I0816 13:40:17.841709   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 33/120
	I0816 13:40:18.843103   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 34/120
	I0816 13:40:19.845058   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 35/120
	I0816 13:40:20.846594   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 36/120
	I0816 13:40:21.847851   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 37/120
	I0816 13:40:22.849216   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 38/120
	I0816 13:40:23.850711   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 39/120
	I0816 13:40:24.852854   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 40/120
	I0816 13:40:25.854310   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 41/120
	I0816 13:40:26.855674   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 42/120
	I0816 13:40:27.856954   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 43/120
	I0816 13:40:28.858364   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 44/120
	I0816 13:40:29.860583   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 45/120
	I0816 13:40:30.862242   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 46/120
	I0816 13:40:31.863720   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 47/120
	I0816 13:40:32.865289   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 48/120
	I0816 13:40:33.866759   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 49/120
	I0816 13:40:34.869277   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 50/120
	I0816 13:40:35.870860   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 51/120
	I0816 13:40:36.872150   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 52/120
	I0816 13:40:37.873602   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 53/120
	I0816 13:40:38.875121   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 54/120
	I0816 13:40:39.877280   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 55/120
	I0816 13:40:40.878736   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 56/120
	I0816 13:40:41.880200   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 57/120
	I0816 13:40:42.881838   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 58/120
	I0816 13:40:43.883335   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 59/120
	I0816 13:40:44.884690   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 60/120
	I0816 13:40:45.886044   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 61/120
	I0816 13:40:46.887681   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 62/120
	I0816 13:40:47.889461   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 63/120
	I0816 13:40:48.890966   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 64/120
	I0816 13:40:49.893051   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 65/120
	I0816 13:40:50.894708   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 66/120
	I0816 13:40:51.896120   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 67/120
	I0816 13:40:52.897695   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 68/120
	I0816 13:40:53.899162   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 69/120
	I0816 13:40:54.901549   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 70/120
	I0816 13:40:55.902902   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 71/120
	I0816 13:40:56.904398   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 72/120
	I0816 13:40:57.906030   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 73/120
	I0816 13:40:58.907694   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 74/120
	I0816 13:40:59.909744   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 75/120
	I0816 13:41:00.911010   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 76/120
	I0816 13:41:01.912552   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 77/120
	I0816 13:41:02.914035   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 78/120
	I0816 13:41:03.915472   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 79/120
	I0816 13:41:04.917974   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 80/120
	I0816 13:41:05.919388   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 81/120
	I0816 13:41:06.920845   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 82/120
	I0816 13:41:07.922579   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 83/120
	I0816 13:41:08.924058   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 84/120
	I0816 13:41:09.926204   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 85/120
	I0816 13:41:10.927661   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 86/120
	I0816 13:41:11.929080   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 87/120
	I0816 13:41:12.930511   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 88/120
	I0816 13:41:13.931937   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 89/120
	I0816 13:41:14.934182   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 90/120
	I0816 13:41:15.935681   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 91/120
	I0816 13:41:16.937210   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 92/120
	I0816 13:41:17.938562   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 93/120
	I0816 13:41:18.939893   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 94/120
	I0816 13:41:19.941964   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 95/120
	I0816 13:41:20.943286   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 96/120
	I0816 13:41:21.944659   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 97/120
	I0816 13:41:22.946016   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 98/120
	I0816 13:41:23.947496   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 99/120
	I0816 13:41:24.948864   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 100/120
	I0816 13:41:25.950220   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 101/120
	I0816 13:41:26.951777   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 102/120
	I0816 13:41:27.953238   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 103/120
	I0816 13:41:28.954737   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 104/120
	I0816 13:41:29.956642   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 105/120
	I0816 13:41:30.957940   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 106/120
	I0816 13:41:31.959237   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 107/120
	I0816 13:41:32.960542   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 108/120
	I0816 13:41:33.962279   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 109/120
	I0816 13:41:34.963604   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 110/120
	I0816 13:41:35.965071   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 111/120
	I0816 13:41:36.966553   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 112/120
	I0816 13:41:37.968020   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 113/120
	I0816 13:41:38.969465   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 114/120
	I0816 13:41:39.971749   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 115/120
	I0816 13:41:40.973212   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 116/120
	I0816 13:41:41.974485   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 117/120
	I0816 13:41:42.975871   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 118/120
	I0816 13:41:43.977356   57613 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for machine to stop 119/120
	I0816 13:41:44.978036   57613 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 13:41:44.978107   57613 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 13:41:44.980215   57613 out.go:201] 
	W0816 13:41:44.981584   57613 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 13:41:44.981600   57613 out.go:270] * 
	* 
	W0816 13:41:44.984176   57613 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:41:44.985788   57613 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-893736 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736: exit status 3 (18.585589124s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:42:03.573216   58225 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host
	E0816 13:42:03.573235   58225 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-893736" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)
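
A minimal manual check for the GUEST_STOP_TIMEOUT above (a sketch, not captured in this run; it assumes the kvm2 driver names the libvirt domain after the profile, default-k8s-diff-port-893736 here, and that virsh is available on the host):

	# What libvirt thinks the guest is doing while minikube polls "Waiting for machine to stop"
	virsh -c qemu:///system domstate default-k8s-diff-port-893736
	# Ask for a graceful ACPI shutdown first; hard power-off only if it stays "running"
	virsh -c qemu:///system shutdown default-k8s-diff-port-893736
	virsh -c qemu:///system destroy default-k8s-diff-port-893736
	# Re-run the stop that timed out, with the same verbosity the test used
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-893736 --alsologtostderr -v=3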

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (723.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 13:40:40.921091   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m0.310951319s)

                                                
                                                
-- stdout --
	* [old-k8s-version-882237] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-882237" primary control-plane node in "old-k8s-version-882237" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:40:30.100100   57945 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:40:30.100210   57945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:40:30.100220   57945 out.go:358] Setting ErrFile to fd 2...
	I0816 13:40:30.100224   57945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:40:30.100416   57945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:40:30.100977   57945 out.go:352] Setting JSON to false
	I0816 13:40:30.101951   57945 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4975,"bootTime":1723810655,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:40:30.102007   57945 start.go:139] virtualization: kvm guest
	I0816 13:40:30.104231   57945 out.go:177] * [old-k8s-version-882237] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:40:30.105527   57945 notify.go:220] Checking for updates...
	I0816 13:40:30.105537   57945 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:40:30.107058   57945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:40:30.108413   57945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:40:30.109651   57945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:40:30.110854   57945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:40:30.112019   57945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:40:30.113392   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:40:30.113818   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:40:30.113881   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:40:30.128775   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37107
	I0816 13:40:30.129188   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:40:30.129746   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:40:30.129770   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:40:30.130073   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:40:30.130277   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:40:30.131887   57945 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 13:40:30.132983   57945 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:40:30.133297   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:40:30.133335   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:40:30.147902   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39369
	I0816 13:40:30.148258   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:40:30.148711   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:40:30.148729   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:40:30.149021   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:40:30.149182   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:40:30.182740   57945 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:40:30.183982   57945 start.go:297] selected driver: kvm2
	I0816 13:40:30.183998   57945 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:40:30.184104   57945 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:40:30.184747   57945 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:40:30.184812   57945 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:40:30.199213   57945 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:40:30.199577   57945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:40:30.199614   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:40:30.199622   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:40:30.199657   57945 start.go:340] cluster config:
	{Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:40:30.199746   57945 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:40:30.201950   57945 out.go:177] * Starting "old-k8s-version-882237" primary control-plane node in "old-k8s-version-882237" cluster
	I0816 13:40:30.203152   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:40:30.203187   57945 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:40:30.203196   57945 cache.go:56] Caching tarball of preloaded images
	I0816 13:40:30.203260   57945 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:40:30.203269   57945 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 13:40:30.203357   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:40:30.203526   57945 start.go:360] acquireMachinesLock for old-k8s-version-882237: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
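	The half-second cadence of the pgrep probes above is minikube waiting for a kube-apiserver process to appear after re-running the kubeadm init phases; when the deadline passes with no match, it falls back to the diagnostic pass that follows. A minimal, self-contained Go sketch of that poll-with-deadline pattern (function names, the timeout, and the local exec call are illustrative assumptions, not minikube's actual ssh_runner-based implementation):

	// Illustrative only: poll `pgrep -xnf <pattern>` until it succeeds or the
	// context deadline expires, in the spirit of the probes logged above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess retries pgrep at the given interval until a process
	// matching pattern exists or ctx is cancelled/expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			// pgrep exits 0 when at least one process matches the pattern.
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process is up")
	}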
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
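	The block above is the diagnostic pass taken when the apiserver never shows up: CRI-O is queried for each expected control-plane container by name (all empty here), then kubelet, dmesg, node-describe, CRI-O, and container-status logs are gathered. A small Go sketch of that container-listing step, assuming only that `crictl` is on PATH; the component list and output handling are illustrative, not minikube's code:

	// Illustrative only: list CRI containers for each expected component,
	// mirroring the "sudo crictl ps -a --quiet --name=<x>" probes above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs, one per line; empty output
			// means no container (running or exited) matches the name filter.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}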
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
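The repeated blocks above are minikube's restart wait loop for the old v1.20.0 control plane: roughly every three seconds it looks for a kube-apiserver process with pgrep, lists CRI-O containers for each control-plane component with crictl, and, finding none, gathers kubelet, dmesg, CRI-O and "describe nodes" diagnostics (the describe always fails because nothing is listening on localhost:8443). After about four minutes it gives up and falls back to a full cluster reset. A rough, simplified sketch of that probe, not minikube's actual code, assuming crictl and CRI-O are present on the node:

    # simplified probe loop; mirrors the commands logged above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
    done
    # the wait only ends early if an apiserver process shows up
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"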
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
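Before retrying kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint; because kubeadm reset has already removed the files, every grep exits with status 2 and the subsequent rm is a no-op, but the step guarantees no stale configuration leaks into the new init. A minimal sketch of that cleanup, under the assumption it can be expressed as a plain shell loop:

    # stale-kubeconfig cleanup as shown in the log above (simplified sketch)
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere
        fi
    done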
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
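This first kubeadm init attempt fails in the wait-control-plane phase: kubeadm polls the kubelet health endpoint on 127.0.0.1:10248 and every call is refused, so the static control-plane pods are never started; minikube then resets and retries below, and the retry fails the same way. The commands that follow, taken from the troubleshooting hints kubeadm prints above, are what one would run on the node to confirm the kubelet never came up (assumes a systemd host with CRI-O at /var/run/crio/crio.sock):

    # manual confirmation of the failure mode reported above
    curl -sSL http://localhost:10248/healthz || echo "kubelet healthz unreachable"
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause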
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	* 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	* 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 

                                                
                                                
** /stderr **
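The failure above is the kubelet never answering on 10248, and the captured log itself suggests the cgroup-driver angle. A minimal follow-up sketch, assuming the same old-k8s-version-882237 profile, kvm2 driver and crio runtime used by the failing invocation below (flags taken from that invocation and from the log's own suggestion, not verified here):

	# look at why the kubelet never became healthy on the node
	minikube -p old-k8s-version-882237 ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
	minikube start -p old-k8s-version-882237 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
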
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-882237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (231.305977ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-882237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-882237 logs -n 25: (1.596475277s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-779306 -- sudo                         | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-779306                                 | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759623                           | kubernetes-upgrade-759623    | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:35 UTC |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-302520            | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:42:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:42:15.998819   58430 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:42:15.998960   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.998970   58430 out.go:358] Setting ErrFile to fd 2...
	I0816 13:42:15.998976   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.999197   58430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:42:15.999747   58430 out.go:352] Setting JSON to false
	I0816 13:42:16.000715   58430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5081,"bootTime":1723810655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:42:16.000770   58430 start.go:139] virtualization: kvm guest
	I0816 13:42:16.003216   58430 out.go:177] * [default-k8s-diff-port-893736] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:42:16.004663   58430 notify.go:220] Checking for updates...
	I0816 13:42:16.004698   58430 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:42:16.006298   58430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:42:16.007719   58430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:42:16.009073   58430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:42:16.010602   58430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:42:16.012058   58430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:42:16.013799   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:42:16.014204   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.014278   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.029427   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0816 13:42:16.029977   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.030548   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.030573   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.030903   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.031164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.031412   58430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:42:16.031691   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.031731   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.046245   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0816 13:42:16.046668   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.047205   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.047244   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.047537   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.047730   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.080470   58430 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:42:16.081700   58430 start.go:297] selected driver: kvm2
	I0816 13:42:16.081721   58430 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.081825   58430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:42:16.082512   58430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.082593   58430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:42:16.097784   58430 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:42:16.098155   58430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:42:16.098223   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:42:16.098233   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:42:16.098274   58430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.098365   58430 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.100341   58430 out.go:177] * Starting "default-k8s-diff-port-893736" primary control-plane node in "default-k8s-diff-port-893736" cluster
	I0816 13:42:17.205125   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:16.101925   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:42:16.101966   58430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:42:16.101973   58430 cache.go:56] Caching tarball of preloaded images
	I0816 13:42:16.102052   58430 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:42:16.102063   58430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:42:16.102162   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:42:16.102344   58430 start.go:360] acquireMachinesLock for default-k8s-diff-port-893736: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
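
The preload check logged above finds the v1.31.0 cri-o image tarball already present in the local cache and skips the download. A minimal Go sketch of that "reuse the cached tarball if present, otherwise fetch it" decision follows; the helper name, cache layout and fetch callback are assumptions for illustration, not minikube's actual API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensurePreload returns the path to a preloaded-images tarball, downloading it
// only when it is not already present in the local cache directory.
// cacheDir, version and fetch are illustrative, not minikube's real signatures.
func ensurePreload(cacheDir, version string, fetch func(dst string) error) (string, error) {
	dst := filepath.Join(cacheDir, "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", version))
	if _, err := os.Stat(dst); err == nil {
		// Found in cache: skip the download, as the log reports.
		return dst, nil
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return "", err
	}
	if err := fetch(dst); err != nil {
		return "", err
	}
	return dst, nil
}

func main() {
	path, err := ensurePreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.31.0",
		func(dst string) error { return fmt.Errorf("no network in this sketch") })
	fmt.Println(path, err)
}
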
	I0816 13:42:23.285172   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:26.357214   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:32.437218   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:35.509221   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:41.589174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:44.661162   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:50.741223   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:53.813193   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:59.893180   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:02.965205   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:09.045252   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:12.117232   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:18.197189   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:21.269234   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:27.349182   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:30.421174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:36.501197   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:39.573246   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:42.577406   57440 start.go:364] duration metric: took 4m10.318515071s to acquireMachinesLock for "no-preload-311070"
	I0816 13:43:42.577513   57440 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:43:42.577529   57440 fix.go:54] fixHost starting: 
	I0816 13:43:42.577955   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:43:42.577989   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:43:42.593032   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0816 13:43:42.593416   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:43:42.593860   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:43:42.593882   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:43:42.594256   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:43:42.594434   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:43:42.594586   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:43:42.596234   57440 fix.go:112] recreateIfNeeded on no-preload-311070: state=Stopped err=<nil>
	I0816 13:43:42.596261   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	W0816 13:43:42.596431   57440 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:43:42.598334   57440 out.go:177] * Restarting existing kvm2 VM for "no-preload-311070" ...
	I0816 13:43:42.574954   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:43:42.574990   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575324   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:43:42.575349   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575554   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:43:42.577250   57240 machine.go:96] duration metric: took 4m37.4289608s to provisionDockerMachine
	I0816 13:43:42.577309   57240 fix.go:56] duration metric: took 4m37.450613575s for fixHost
	I0816 13:43:42.577314   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 4m37.450631849s
	W0816 13:43:42.577330   57240 start.go:714] error starting host: provision: host is not running
	W0816 13:43:42.577401   57240 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 13:43:42.577410   57240 start.go:729] Will try again in 5 seconds ...
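
The embed-certs host fails to provision and the start logic schedules another attempt after a fixed delay; the retry.go lines further down show the same idea with a growing wait while the no-preload VM acquires an IP. A generic Go sketch of that retry-with-backoff pattern, using illustrative attempt counts and delays rather than minikube's real values:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs op up to attempts times, sleeping between failures and growing
// the delay each round, roughly like the back-off visible in the log.
func retry(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	err := retry(3, 500*time.Millisecond, func() error {
		return errors.New("host is not running") // stand-in for the provision error
	})
	fmt.Println(err)
}
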
	I0816 13:43:42.599558   57440 main.go:141] libmachine: (no-preload-311070) Calling .Start
	I0816 13:43:42.599720   57440 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:43:42.600383   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:43:42.600682   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:43:42.601157   57440 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:43:42.601868   57440 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:43:43.816308   57440 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:43:43.817179   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:43.817566   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:43.817586   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:43.817516   58770 retry.go:31] will retry after 295.385031ms: waiting for machine to come up
	I0816 13:43:44.115046   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.115850   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.115875   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.115787   58770 retry.go:31] will retry after 340.249659ms: waiting for machine to come up
	I0816 13:43:44.457278   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.457722   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.457752   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.457657   58770 retry.go:31] will retry after 476.905089ms: waiting for machine to come up
	I0816 13:43:44.936230   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.936674   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.936714   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.936640   58770 retry.go:31] will retry after 555.288542ms: waiting for machine to come up
	I0816 13:43:45.493301   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.493698   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.493724   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.493657   58770 retry.go:31] will retry after 462.336365ms: waiting for machine to come up
	I0816 13:43:45.957163   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.957553   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.957580   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.957509   58770 retry.go:31] will retry after 886.665194ms: waiting for machine to come up
	I0816 13:43:46.845380   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:46.845743   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:46.845763   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:46.845723   58770 retry.go:31] will retry after 909.05227ms: waiting for machine to come up
	I0816 13:43:47.579134   57240 start.go:360] acquireMachinesLock for embed-certs-302520: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:43:47.755998   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:47.756439   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:47.756460   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:47.756407   58770 retry.go:31] will retry after 1.380778497s: waiting for machine to come up
	I0816 13:43:49.138398   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:49.138861   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:49.138884   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:49.138811   58770 retry.go:31] will retry after 1.788185586s: waiting for machine to come up
	I0816 13:43:50.929915   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:50.930326   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:50.930356   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:50.930276   58770 retry.go:31] will retry after 1.603049262s: waiting for machine to come up
	I0816 13:43:52.536034   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:52.536492   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:52.536518   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:52.536438   58770 retry.go:31] will retry after 1.964966349s: waiting for machine to come up
	I0816 13:43:54.504003   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:54.504408   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:54.504440   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:54.504363   58770 retry.go:31] will retry after 3.616796835s: waiting for machine to come up
	I0816 13:43:58.122295   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:58.122714   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:58.122747   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:58.122673   58770 retry.go:31] will retry after 3.893804146s: waiting for machine to come up
	I0816 13:44:02.020870   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021351   57440 main.go:141] libmachine: (no-preload-311070) Found IP for machine: 192.168.61.116
	I0816 13:44:02.021372   57440 main.go:141] libmachine: (no-preload-311070) Reserving static IP address...
	I0816 13:44:02.021385   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has current primary IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021917   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.021948   57440 main.go:141] libmachine: (no-preload-311070) Reserved static IP address: 192.168.61.116
	I0816 13:44:02.021966   57440 main.go:141] libmachine: (no-preload-311070) DBG | skip adding static IP to network mk-no-preload-311070 - found existing host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"}
	I0816 13:44:02.021977   57440 main.go:141] libmachine: (no-preload-311070) DBG | Getting to WaitForSSH function...
	I0816 13:44:02.021989   57440 main.go:141] libmachine: (no-preload-311070) Waiting for SSH to be available...
	I0816 13:44:02.024661   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025071   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.025094   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025327   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH client type: external
	I0816 13:44:02.025349   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa (-rw-------)
	I0816 13:44:02.025376   57440 main.go:141] libmachine: (no-preload-311070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:02.025387   57440 main.go:141] libmachine: (no-preload-311070) DBG | About to run SSH command:
	I0816 13:44:02.025406   57440 main.go:141] libmachine: (no-preload-311070) DBG | exit 0
	I0816 13:44:02.148864   57440 main.go:141] libmachine: (no-preload-311070) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:02.149279   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:44:02.149868   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.152149   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152460   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.152481   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152681   57440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:44:02.152853   57440 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:02.152869   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:02.153131   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.155341   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155703   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.155743   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155845   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:02.156024   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156199   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156350   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.156557   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.156747   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.156758   57440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:02.261263   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:02.261290   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261514   57440 buildroot.go:166] provisioning hostname "no-preload-311070"
	I0816 13:44:02.261528   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261696   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.264473   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.264892   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.264936   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.265030   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.265198   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265365   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265485   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.265624   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.265796   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.265813   57440 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname
	I0816 13:44:02.384079   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-311070
	
	I0816 13:44:02.384112   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.386756   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387065   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.387104   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387285   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.387501   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387699   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387843   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.387997   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.388193   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.388218   57440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-311070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-311070/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-311070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:02.502089   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
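
The SSH command above sets the transient hostname and then makes sure /etc/hosts maps 127.0.1.1 to it, rewriting an existing 127.0.1.1 entry or appending one. A simplified Go sketch of the same guard logic applied to the file contents as a string; the helper is hypothetical and its presence check is looser than the grep used above.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry adds or rewrites the 127.0.1.1 line so it points at name,
// leaving the contents untouched when the name already appears somewhere.
func ensureHostsEntry(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "no-preload-311070"))
}
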
	I0816 13:44:02.502122   57440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:02.502159   57440 buildroot.go:174] setting up certificates
	I0816 13:44:02.502173   57440 provision.go:84] configureAuth start
	I0816 13:44:02.502191   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.502484   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.505215   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505523   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.505560   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505726   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.507770   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508033   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.508062   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508193   57440 provision.go:143] copyHostCerts
	I0816 13:44:02.508249   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:02.508267   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:02.508336   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:02.508426   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:02.508435   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:02.508460   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:02.508520   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:02.508527   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:02.508548   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:02.508597   57440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.no-preload-311070 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-311070]
	I0816 13:44:02.732379   57440 provision.go:177] copyRemoteCerts
	I0816 13:44:02.732434   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:02.732458   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.735444   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.735803   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.735837   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.736040   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.736274   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.736428   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.736587   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:02.819602   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:02.843489   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:44:02.866482   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:02.889908   57440 provision.go:87] duration metric: took 387.723287ms to configureAuth
	I0816 13:44:02.889936   57440 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:02.890151   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:02.890250   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.892851   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893158   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.893184   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893381   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.893607   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893777   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893925   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.894076   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.894267   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.894286   57440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:03.153730   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:03.153755   57440 machine.go:96] duration metric: took 1.000891309s to provisionDockerMachine
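
As the last provisioning step above, the runtime options are written to /etc/sysconfig/crio.minikube so CRI-O treats the cluster's service CIDR (10.96.0.0/12) as an insecure registry, and crio is restarted to pick the file up. A tiny Go sketch that renders the same one-line sysconfig content shown in the log; the helper name is illustrative only.

package main

import "fmt"

// crioMinikubeSysconfig renders the sysconfig line dropped onto the guest,
// matching the CRIO_MINIKUBE_OPTIONS content echoed in the log above.
func crioMinikubeSysconfig(insecureCIDR string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
}

func main() {
	fmt.Print(crioMinikubeSysconfig("10.96.0.0/12"))
}
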
	I0816 13:44:03.153766   57440 start.go:293] postStartSetup for "no-preload-311070" (driver="kvm2")
	I0816 13:44:03.153776   57440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:03.153790   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.154088   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:03.154122   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.156612   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.156931   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.156969   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.157113   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.157299   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.157438   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.157595   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.241700   57440 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:03.246133   57440 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:03.246155   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:03.246221   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:03.246292   57440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:03.246379   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:03.257778   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:03.283511   57440 start.go:296] duration metric: took 129.718161ms for postStartSetup
	I0816 13:44:03.283552   57440 fix.go:56] duration metric: took 20.706029776s for fixHost
	I0816 13:44:03.283603   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.286296   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286608   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.286651   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286803   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.287016   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287158   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287298   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.287477   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:03.287639   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:03.287649   57440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:03.389691   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815843.358144829
	
	I0816 13:44:03.389710   57440 fix.go:216] guest clock: 1723815843.358144829
	I0816 13:44:03.389717   57440 fix.go:229] Guest: 2024-08-16 13:44:03.358144829 +0000 UTC Remote: 2024-08-16 13:44:03.283556408 +0000 UTC m=+271.159980604 (delta=74.588421ms)
	I0816 13:44:03.389749   57440 fix.go:200] guest clock delta is within tolerance: 74.588421ms
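
The fix step compares the guest clock read over SSH via date +%s.%N (epoch 1723815843.358144829) with the host's view of the same instant and accepts the 74.588421ms difference as within tolerance. A small Go sketch of that comparison using the values from the log; the tolerance constant here is an assumption, not minikube's configured limit.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock and the host's reading of "now"
// differ by no more than tolerance, the check the fix step logs above.
func clockDeltaOK(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1723815843, 358144829)
	remote := time.Date(2024, 8, 16, 13, 44, 3, 283556408, time.UTC)
	delta, ok := clockDeltaOK(guest, remote, 2*time.Second) // tolerance is an assumed value
	fmt.Println(delta, ok)
}
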
	I0816 13:44:03.389754   57440 start.go:83] releasing machines lock for "no-preload-311070", held for 20.812259998s
	I0816 13:44:03.389779   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.390029   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:03.392788   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393137   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.393160   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393365   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.393870   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394042   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394125   57440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:03.394180   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.394215   57440 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:03.394235   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.396749   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.396813   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397124   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397152   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397180   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397197   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397466   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397543   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397717   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397731   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397874   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397921   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397998   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.398077   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
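
The two sshutil clients above both target the no-preload-311070 VM that was just matched to its DHCP lease. For reference, a minimal manual equivalent of what the harness does over these connections, using the address, port, user, and key path reported in the log (the ssh invocation itself is only an illustration; libmachine uses an in-process SSH client), is:

	# connect to the no-preload-311070 VM the same way the test harness does
	ssh -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa \
	    -p 22 docker@192.168.61.116 'date +%s.%N'
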
	I0816 13:44:03.473552   57440 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:03.497958   57440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:03.644212   57440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:03.651347   57440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:03.651455   57440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:03.667822   57440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:03.667842   57440 start.go:495] detecting cgroup driver to use...
	I0816 13:44:03.667915   57440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:03.685838   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:03.700002   57440 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:03.700073   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:03.713465   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:03.726793   57440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:03.838274   57440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:03.967880   57440 docker.go:233] disabling docker service ...
	I0816 13:44:03.967951   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:03.982178   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:03.994574   57440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:04.132374   57440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:04.242820   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:04.257254   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:04.277961   57440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:04.278018   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.288557   57440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:04.288621   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.299108   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.310139   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.320850   57440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:04.332224   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.342905   57440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.361606   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.372423   57440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:04.382305   57440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:04.382355   57440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:04.396774   57440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:04.408417   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:04.516638   57440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:04.684247   57440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:04.684316   57440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:04.689824   57440 start.go:563] Will wait 60s for crictl version
	I0816 13:44:04.689878   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:04.693456   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:04.732628   57440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:04.732712   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.760111   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.790127   57440 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
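
The block above rewrites the CRI-O drop-in configuration and restarts the runtime before Kubernetes v1.31.0 is prepared. Condensed into a shell sketch, with the paths, pause image, and cgroup driver taken directly from the commands in the log, the core of that sequence is roughly:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# set the pause image and cgroup driver in the CRI-O drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# reload unit files, restart CRI-O, and confirm the runtime answers over CRI
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	sudo crictl version
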
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
	I0816 13:44:04.791315   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:04.794224   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794587   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:04.794613   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794794   57440 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:04.798848   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:04.811522   57440 kubeadm.go:883] updating cluster {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:04.811628   57440 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:04.811685   57440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:04.845546   57440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:04.845567   57440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:04.845630   57440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.845654   57440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.845687   57440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:04.845714   57440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.845694   57440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.845789   57440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.845839   57440 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:44:04.845875   57440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.847440   57440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.847454   57440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.847484   57440 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:44:04.847429   57440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847431   57440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.847508   57440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.036225   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.071514   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.075186   57440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 13:44:05.075233   57440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.075273   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111591   57440 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 13:44:05.111634   57440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.111687   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111704   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.145127   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.145289   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.186194   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.200886   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.203824   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.208201   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.209021   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.234117   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.234893   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.245119   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 13:44:05.305971   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:44:05.306084   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.374880   57440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 13:44:05.374922   57440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.374971   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399114   57440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 13:44:05.399156   57440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.399187   57440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 13:44:05.399216   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399225   57440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.399267   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399318   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:44:05.399414   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:05.401940   57440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 13:44:05.401975   57440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.402006   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.513930   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 13:44:05.513961   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514017   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514032   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.514059   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.514112   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 13:44:05.514115   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.514150   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.634275   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.634340   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.864118   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
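
Because the preload check found no preloaded images for v1.31.0 on crio, each image is loaded from the local cache instead: the runtime is inspected for the expected digest, any mismatched tag is removed, and the cached tarball under /var/lib/minikube/images is loaded with podman. A sketch of that per-image flow for kube-proxy, using the exact paths and tags from the log, looks like:

	# does the runtime already have the image at the expected digest?
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.31.0 || true
	# drop the stale tag so the cached copy can be loaded cleanly
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0 || true
	# load the cached tarball into the image store used by CRI-O
	sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0

When a tarball is missing on the VM it is copied over first; here the "copy: skipping ... (exists)" lines show the cached files were already in place.
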
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:07.542707   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.028664898s)
	I0816 13:44:07.542745   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 13:44:07.542770   57440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542773   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.028589727s)
	I0816 13:44:07.542817   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.028737534s)
	I0816 13:44:07.542831   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542837   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:07.542869   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:07.542888   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.908584925s)
	I0816 13:44:07.542937   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:07.542951   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.908590671s)
	I0816 13:44:07.542995   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:07.543034   57440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.678883978s)
	I0816 13:44:07.543068   57440 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 13:44:07.543103   57440 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:07.543138   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:11.390456   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (3.847434032s)
	I0816 13:44:11.390507   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:44:11.390610   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.390647   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.847797916s)
	I0816 13:44:11.390674   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 13:44:11.390684   57440 ssh_runner.go:235] Completed: which crictl: (3.847535001s)
	I0816 13:44:11.390740   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.390780   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (3.847819859s)
	I0816 13:44:11.390810   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.847960553s)
	I0816 13:44:11.390825   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:44:11.390848   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:11.390908   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:11.390923   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (3.848033361s)
	I0816 13:44:11.390978   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:11.461833   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 13:44:11.461859   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461905   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461922   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:44:11.461843   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.461990   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 13:44:11.462013   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:11.462557   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:44:11.462649   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
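
In parallel, old-k8s-version-882237 is still waiting for an address: libmachine repeatedly looks for a DHCP lease matching the domain's MAC (52:54:00:ce:02:bd) in the mk-old-k8s-version-882237 network, backing off between attempts. A rough shell equivalent of that wait loop (virsh is used here purely as an illustration; libmachine queries libvirt directly) would be:

	# poll the libvirt network until the domain's MAC gets a DHCP lease
	mac=52:54:00:ce:02:bd
	delay=1
	until virsh net-dhcp-leases mk-old-k8s-version-882237 | grep -qi "$mac"; do
	  sleep "$delay"
	  delay=$((delay * 2))   # grow the wait, roughly mirroring the retry.go delays above
	done
	virsh net-dhcp-leases mk-old-k8s-version-882237 | grep -i "$mac"
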
	I0816 13:44:13.839070   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.377139357s)
	I0816 13:44:13.839101   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 13:44:13.839141   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839207   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839255   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377282192s)
	I0816 13:44:13.839312   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.377281378s)
	I0816 13:44:13.839358   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 13:44:13.839358   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.376690281s)
	I0816 13:44:13.839379   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 13:44:13.839318   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:13.880059   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 13:44:13.880203   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:15.818912   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.979684366s)
	I0816 13:44:15.818943   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 13:44:15.818975   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.938747663s)
	I0816 13:44:15.818986   57440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.819000   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 13:44:15.819043   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:17.795989   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976923514s)
	I0816 13:44:17.796013   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 13:44:17.796040   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:17.796088   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:19.147815   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351703003s)
	I0816 13:44:19.147843   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 13:44:19.147869   57440 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.147919   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.791370   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 13:44:19.791414   57440 cache_images.go:123] Successfully loaded all cached images
	I0816 13:44:19.791421   57440 cache_images.go:92] duration metric: took 14.945842963s to LoadCachedImages
	I0816 13:44:19.791440   57440 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0 crio true true} ...
	I0816 13:44:19.791593   57440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:19.791681   57440 ssh_runner.go:195] Run: crio config
	I0816 13:44:19.843963   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:19.843984   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:19.844003   57440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:19.844029   57440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-311070 NodeName:no-preload-311070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:19.844189   57440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-311070"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:19.844250   57440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:19.854942   57440 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:19.855014   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:19.864794   57440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 13:44:19.881210   57440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:19.897450   57440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
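
The 317-byte file copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears to be the kubelet override printed earlier (Wants=crio.service plus the node-specific ExecStart), and the 2161-byte kubeadm.yaml.new is the rendered kubeadm config above. Written by hand, the drop-in would look roughly like this (content reconstructed from the log, not copied from the VM):

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116

	[Install]
	EOF
	# pick up the new unit files and start the kubelet, as the log does a few lines below
	sudo systemctl daemon-reload && sudo systemctl start kubelet
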
	I0816 13:44:19.916038   57440 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:19.919995   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:19.934081   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:20.077422   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:20.093846   57440 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070 for IP: 192.168.61.116
	I0816 13:44:20.093864   57440 certs.go:194] generating shared ca certs ...
	I0816 13:44:20.093881   57440 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:20.094055   57440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:20.094120   57440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:20.094135   57440 certs.go:256] generating profile certs ...
	I0816 13:44:20.094236   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.key
	I0816 13:44:20.094325   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key.000c4f90
	I0816 13:44:20.094391   57440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key
	I0816 13:44:20.094529   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:20.094571   57440 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:20.094584   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:20.094621   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:20.094654   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:20.094795   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:20.094874   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:20.096132   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:20.130987   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:20.160701   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:20.187948   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:20.217162   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:44:20.242522   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:44:20.273582   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:20.300613   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:20.328363   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:20.353396   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:20.377770   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:20.401760   57440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:20.418302   57440 ssh_runner.go:195] Run: openssl version
	I0816 13:44:20.424065   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:20.434841   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439352   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439398   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.445210   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:20.455727   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:20.466095   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470528   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470568   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.476080   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:20.486189   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:20.496373   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500696   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500737   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.506426   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:20.517130   57440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:20.521664   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:20.527604   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:20.533478   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:20.539285   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:20.545042   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:20.550681   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
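
Before reusing the existing node, the harness trust-links each CA into /etc/ssl/certs under its OpenSSL subject hash and then checks that every control-plane certificate remains valid for at least another 24 hours. The two patterns, condensed from the commands above (minikubeCA.pem and the apiserver-kubelet-client cert serve as representative examples):

	# link a CA into the trust store under its subject hash
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# exit non-zero if the certificate expires within the next 24h (86400 seconds)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
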
	I0816 13:44:20.556239   57440 kubeadm.go:392] StartCluster: {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:20.556314   57440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:20.556391   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.594069   57440 cri.go:89] found id: ""
	I0816 13:44:20.594128   57440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:20.604067   57440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:20.604085   57440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:20.604131   57440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:20.614182   57440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:20.615072   57440 kubeconfig.go:125] found "no-preload-311070" server: "https://192.168.61.116:8443"
	I0816 13:44:20.617096   57440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:20.626046   57440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0816 13:44:20.626069   57440 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:20.626078   57440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:20.626114   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.659889   57440 cri.go:89] found id: ""
	I0816 13:44:20.659954   57440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:20.676977   57440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:20.686930   57440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:20.686946   57440 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:20.686985   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:20.696144   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:20.696222   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:20.705550   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:20.714350   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:20.714399   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:20.723636   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.732287   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:20.732329   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.741390   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:20.749913   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:20.749956   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:20.758968   57440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:20.768054   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:20.872847   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:21.933273   57440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060394194s)
	I0816 13:44:21.933303   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.130462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:23.689897   58430 start.go:364] duration metric: took 2m7.587518205s to acquireMachinesLock for "default-k8s-diff-port-893736"
	I0816 13:44:23.689958   58430 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:23.689965   58430 fix.go:54] fixHost starting: 
	I0816 13:44:23.690363   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:23.690401   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:23.707766   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0816 13:44:23.708281   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:23.709439   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:23.709462   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:23.709757   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:23.709906   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:23.710017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:23.711612   58430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-893736: state=Stopped err=<nil>
	I0816 13:44:23.711655   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	W0816 13:44:23.711797   58430 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:23.713600   58430 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-893736" ...
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:44:23.714877   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Start
	I0816 13:44:23.715069   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring networks are active...
	I0816 13:44:23.715788   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network default is active
	I0816 13:44:23.716164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network mk-default-k8s-diff-port-893736 is active
	I0816 13:44:23.716648   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Getting domain xml...
	I0816 13:44:23.717424   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Creating domain...
	I0816 13:44:24.979917   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting to get IP...
	I0816 13:44:24.980942   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981375   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981448   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:24.981363   59082 retry.go:31] will retry after 199.038336ms: waiting for machine to come up
	I0816 13:44:25.181886   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182350   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.182330   59082 retry.go:31] will retry after 297.566018ms: waiting for machine to come up
	I0816 13:44:25.481811   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482271   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482296   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.482234   59082 retry.go:31] will retry after 297.833233ms: waiting for machine to come up
	I0816 13:44:25.781831   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782445   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782479   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.782400   59082 retry.go:31] will retry after 459.810978ms: waiting for machine to come up
	I0816 13:44:22.220022   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.317717   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:22.317800   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:22.818025   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.318171   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.354996   57440 api_server.go:72] duration metric: took 1.037294965s to wait for apiserver process to appear ...
	I0816 13:44:23.355023   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:23.355043   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:23.355677   57440 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0816 13:44:23.855190   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.719152   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.719184   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.719204   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.756329   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.756366   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.855581   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.862856   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:26.862885   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.355555   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.365664   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.365702   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.855844   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.863185   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.863227   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:28.355490   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:28.361410   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:44:28.374558   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:28.374593   57440 api_server.go:131] duration metric: took 5.019562023s to wait for apiserver health ...
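The health gate above polls https://192.168.61.116:8443/healthz until the verbose checks stop reporting a failing post-start hook (the only hook still showing [-] at this point is rbac/bootstrap-roles) and the endpoint returns a plain 200. A rough manual equivalent is sketched below; it is illustrative only, is not something the test runs, uses the kubeconfig path from this log, and assumes the kubeconfig context name matches the profile name, as minikube normally sets it.

	# Sketch: reproduce the same probe by hand with kubectl's raw passthrough.
	export KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	kubectl --context no-preload-311070 get --raw='/healthz?verbose'   # per-check [+]/[-] output, as in the log
	kubectl --context no-preload-311070 get --raw='/healthz'           # plain "ok" once every hook has passed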
	I0816 13:44:28.374604   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:28.374613   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:28.376749   57440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:28.378413   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:28.401199   57440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
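The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a generic bridge-plugin conflist of that kind is sketched below; the exact file minikube generates may differ, and 10.244.0.0/16 is assumed here only because it is the usual minikube pod CIDR (it also appears in the kubeadm options later in this log).

	# Illustrative example of a bridge CNI conflist; written to /tmp so it is harmless to run.
	# (<<- strips the leading tabs used for layout here.)
	cat > /tmp/1-k8s.conflist <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF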
	I0816 13:44:28.420798   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:28.452605   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:28.452645   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:28.452655   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:28.452663   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:28.452671   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:28.452680   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:28.452689   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:28.452704   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:28.452710   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:44:28.452719   57440 system_pods.go:74] duration metric: took 31.89892ms to wait for pod list to return data ...
	I0816 13:44:28.452726   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:28.463229   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:28.463262   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:28.463275   57440 node_conditions.go:105] duration metric: took 10.544476ms to run NodePressure ...
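The NodePressure step reads the capacity and pressure conditions the kubelet reports (2 CPUs and 17734596Ki of ephemeral storage here). The commands below pull the same fields with kubectl; they are an illustrative equivalent, not what minikube executes, and the context name is assumed to match the profile.

	# Sketch: inspect the figures the NodePressure check is based on.
	kubectl --context no-preload-311070 get node no-preload-311070 \
	  -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'
	kubectl --context no-preload-311070 describe node no-preload-311070 | grep -E 'MemoryPressure|DiskPressure|PIDPressure'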
	I0816 13:44:28.463296   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:28.809304   57440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819091   57440 kubeadm.go:739] kubelet initialised
	I0816 13:44:28.819115   57440 kubeadm.go:740] duration metric: took 9.779672ms waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819124   57440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:28.827828   57440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.840277   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840310   57440 pod_ready.go:82] duration metric: took 12.450089ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.840322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840333   57440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.847012   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847036   57440 pod_ready.go:82] duration metric: took 6.692927ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.847045   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847050   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.861358   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861404   57440 pod_ready.go:82] duration metric: took 14.346379ms for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.861417   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861428   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.870641   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870663   57440 pod_ready.go:82] duration metric: took 9.224713ms for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.870671   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870678   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.224281   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224310   57440 pod_ready.go:82] duration metric: took 353.622663ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.224322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224331   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.624518   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624552   57440 pod_ready.go:82] duration metric: took 400.212041ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.624567   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624577   57440 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:30.030291   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030327   57440 pod_ready.go:82] duration metric: took 405.73495ms for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:30.030341   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030352   57440 pod_ready.go:39] duration metric: took 1.211214389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
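Each wait above is cut short because the node itself still reports Ready=False, and metrics-server is still Pending. An illustrative hand-run equivalent of the same per-pod wait, using exactly the labels listed in the log with the same 4m timeout (context name assumed to match the profile; not part of the test itself):

	# Sketch: wait for the same system-critical pods by label.
	CTX="--context no-preload-311070"
	kubectl $CTX -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
	  kubectl $CTX -n kube-system wait --for=condition=Ready pod -l component=$c --timeout=4m
	done
	kubectl $CTX -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m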
	I0816 13:44:30.030372   57440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:30.045247   57440 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:30.045279   57440 kubeadm.go:597] duration metric: took 9.441179951s to restartPrimaryControlPlane
	I0816 13:44:30.045291   57440 kubeadm.go:394] duration metric: took 9.489057744s to StartCluster
	I0816 13:44:30.045312   57440 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.045410   57440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:30.047053   57440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.047310   57440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:30.047415   57440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:30.047486   57440 addons.go:69] Setting storage-provisioner=true in profile "no-preload-311070"
	I0816 13:44:30.047521   57440 addons.go:234] Setting addon storage-provisioner=true in "no-preload-311070"
	W0816 13:44:30.047534   57440 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:30.047569   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048048   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048079   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048302   57440 addons.go:69] Setting default-storageclass=true in profile "no-preload-311070"
	I0816 13:44:30.048339   57440 addons.go:69] Setting metrics-server=true in profile "no-preload-311070"
	I0816 13:44:30.048377   57440 addons.go:234] Setting addon metrics-server=true in "no-preload-311070"
	W0816 13:44:30.048387   57440 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:30.048424   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048343   57440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-311070"
	I0816 13:44:30.048812   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048834   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048933   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048957   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.049282   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:30.050905   57440 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:30.052478   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:30.069405   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0816 13:44:30.069463   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0816 13:44:30.069735   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0816 13:44:30.069949   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070072   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070145   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070488   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070506   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070586   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070598   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070618   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070627   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070977   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071006   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071031   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071212   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071639   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.071621   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.074680   57440 addons.go:234] Setting addon default-storageclass=true in "no-preload-311070"
	W0816 13:44:30.074699   57440 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:30.074730   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.075073   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.075100   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.088961   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0816 13:44:30.089421   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.089952   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.089971   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.090128   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0816 13:44:30.090429   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.090491   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.090744   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.090933   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.090950   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.091263   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.091463   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.093258   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.093571   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
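Since no preloaded images exist for v1.20.0 on cri-o, the old-k8s-version run falls back to copying the ~473 MB cached tarball into the guest and unpacking it into /var. Condensed into shell form for reference (paths and flags taken from the log; the copy itself is done by minikube's ssh_runner over SSH, so it is only described in a comment here):

	# Sketch of the preload fallback, as run inside the guest VM.
	stat -c "%s %y" /preloaded.tar.lz4 || true     # existence check; fails on the first pass, as in the log
	# ...minikube then copies the host-side cache file to /preloaded.tar.lz4 over SSH...
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4                  # cleanup, matching the rm a few lines below
	sudo crictl images --output json | head        # re-check what the runtime now has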
	I0816 13:44:30.094479   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0816 13:44:30.094965   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.095482   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.095502   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.095857   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.096322   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.096361   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.128555   57440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.128676   57440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:26.244353   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245158   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.245062   59082 retry.go:31] will retry after 680.176025ms: waiting for machine to come up
	I0816 13:44:26.926654   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927139   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.927106   59082 retry.go:31] will retry after 720.530442ms: waiting for machine to come up
	I0816 13:44:27.648858   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649342   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649367   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:27.649289   59082 retry.go:31] will retry after 930.752133ms: waiting for machine to come up
	I0816 13:44:28.581283   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581684   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581709   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:28.581635   59082 retry.go:31] will retry after 972.791503ms: waiting for machine to come up
	I0816 13:44:29.556168   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556583   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:29.556525   59082 retry.go:31] will retry after 1.18129541s: waiting for machine to come up
	I0816 13:44:30.739498   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739951   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739978   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:30.739883   59082 retry.go:31] will retry after 2.27951459s: waiting for machine to come up
	I0816 13:44:30.133959   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 13:44:30.134516   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.135080   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.135105   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.135463   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.135598   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.137494   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.137805   57440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.137824   57440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:30.137839   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.141006   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141509   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.141544   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141772   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.141952   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.142150   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.142305   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.164598   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:30.164627   57440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:30.164653   57440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.164654   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.164662   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:30.164687   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.168935   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169259   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169588   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169615   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169806   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.169828   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169859   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169953   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170096   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170103   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.170243   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170241   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.170389   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170505   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.285806   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:30.312267   57440 node_ready.go:35] waiting up to 6m0s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:30.406371   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.409491   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:30.409515   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:30.440485   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:30.440508   57440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:30.480735   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.484549   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:30.484573   57440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:30.541485   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:32.535406   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:33.204746   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.723973086s)
	I0816 13:44:33.204802   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204817   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.204843   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.798437569s)
	I0816 13:44:33.204877   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204889   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205092   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205116   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205126   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205134   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205357   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205359   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205379   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205387   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205395   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205408   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205445   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205454   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205593   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205605   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.214075   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.214095   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.214307   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.214320   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259136   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.717608276s)
	I0816 13:44:33.259188   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259212   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259468   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.259485   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259495   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259503   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259988   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.260004   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.260016   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.260026   57440 addons.go:475] Verifying addon metrics-server=true in "no-preload-311070"
	I0816 13:44:33.262190   57440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
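With storage-provisioner, default-storageclass and metrics-server applied, a quick spot check of each addon is possible once the node goes Ready. The commands below are an illustrative follow-up rather than part of the test; note that this run deliberately points metrics-server at fake.domain (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line above), so its rollout is not expected to complete here.

	# Sketch: verify the addons the log just enabled.
	kubectl --context no-preload-311070 get storageclass                  # default-storageclass
	kubectl --context no-preload-311070 -n kube-system get pod storage-provisioner
	kubectl --context no-preload-311070 -n kube-system rollout status deploy/metrics-server --timeout=2m
	kubectl --context no-preload-311070 top nodes                         # only works once metrics-server serves metrics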
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
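LoadCachedImages gives up because the per-image cache under .minikube/cache/images/amd64/ has never been populated on this host, so the run continues without it and the images are pulled later instead. Purely as an illustration of how that cache could be pre-seeded from the host side (not something this test does; the tar filename below is a hypothetical local archive):

	# Sketch: pre-populate the image cache the warning above refers to.
	minikube -p old-k8s-version-882237 cache add registry.k8s.io/kube-apiserver:v1.20.0
	minikube -p old-k8s-version-882237 cache add registry.k8s.io/etcd:3.4.13-0
	# ...one "cache add" per image in the LoadCachedImages list, or load an archive directly:
	minikube -p old-k8s-version-882237 image load kube-apiserver_v1.20.0.tar   # hypothetical filename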
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
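The kubeadm/kubelet/kube-proxy configuration dumped above is what just landed in /var/tmp/minikube/kubeadm.yaml.new (2123 bytes). Purely as an illustration, a config of this shape can be sanity-checked with kubeadm's dry-run mode before any real init or upgrade runs; the test itself does not do this.

	# Sketch: dry-run validate the generated config inside the guest (no cluster changes).
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run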
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
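The three Run lines ending here follow the same pattern: compute the OpenSSL subject hash of a CA certificate with `openssl x509 -hash -noout -in`, then symlink the certificate under /etc/ssl/certs as `<hash>.0` so the system trust store can find it. The following is a rough Go sketch of that pattern only; the function name linkBySubjectHash is hypothetical and this is not minikube's implementation, which runs the equivalent shell over SSH.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate and
    // exposes the certificate under certsDir/<hash>.0, mirroring the ln -fs pattern
    // in the log. Sketch only.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl hash of %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // equivalent of ln -fs: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Paths taken from the log; this would have to run as root on the guest, not the host.
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }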
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
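Each `openssl x509 -noout -in <cert> -checkend 86400` run above fails if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether existing control-plane certificates can be reused. A minimal standard-library Go sketch of the same check (the helper name expiresWithin is made up for illustration):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window; this is the property `openssl x509 -checkend 86400`
    // tests in the log above. Sketch only.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// One of the certificates the log checks; 86400s == 24h.
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }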
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:33.022378   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023028   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:33.022950   59082 retry.go:31] will retry after 1.906001247s: waiting for machine to come up
	I0816 13:44:34.930169   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930674   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930702   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:34.930612   59082 retry.go:31] will retry after 2.809719622s: waiting for machine to come up
	I0816 13:44:33.263780   57440 addons.go:510] duration metric: took 3.216351591s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 13:44:34.816280   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:36.817474   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
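The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the api_server.go wait loop polling roughly every 500ms for the kube-apiserver process to appear after the kubeadm init phases. A minimal sketch of such a poll, assuming the check is done locally with pgrep rather than over SSH as minikube actually does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls `pgrep -xnf <pattern>` until it succeeds or the timeout
    // elapses, mirroring the repeated pgrep runs in the log. Sketch only.
    func waitForProcess(pattern string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // pgrep exits 0 once a matching process exists
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute, 500*time.Millisecond); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kube-apiserver is running")
    }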
	I0816 13:44:37.742122   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742506   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742545   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:37.742464   59082 retry.go:31] will retry after 4.139761236s: waiting for machine to come up
	I0816 13:44:37.815407   57440 node_ready.go:49] node "no-preload-311070" has status "Ready":"True"
	I0816 13:44:37.815428   57440 node_ready.go:38] duration metric: took 7.503128864s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:37.815437   57440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:37.820318   57440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825460   57440 pod_ready.go:93] pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.825478   57440 pod_ready.go:82] duration metric: took 5.136508ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825486   57440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829609   57440 pod_ready.go:93] pod "etcd-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.829628   57440 pod_ready.go:82] duration metric: took 4.133294ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829635   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:39.835973   57440 pod_ready.go:103] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:40.335270   57440 pod_ready.go:93] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:40.335289   57440 pod_ready.go:82] duration metric: took 2.505647853s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:40.335298   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:43.233555   57240 start.go:364] duration metric: took 55.654362151s to acquireMachinesLock for "embed-certs-302520"
	I0816 13:44:43.233638   57240 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:43.233649   57240 fix.go:54] fixHost starting: 
	I0816 13:44:43.234047   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:43.234078   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:43.253929   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0816 13:44:43.254405   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:43.254878   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:44:43.254900   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:43.255235   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:43.255400   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:44:43.255578   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:44:43.257434   57240 fix.go:112] recreateIfNeeded on embed-certs-302520: state=Stopped err=<nil>
	I0816 13:44:43.257472   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	W0816 13:44:43.257637   57240 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:43.259743   57240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-302520" ...
	I0816 13:44:41.885729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886143   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Found IP for machine: 192.168.50.186
	I0816 13:44:41.886162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserving static IP address...
	I0816 13:44:41.886178   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has current primary IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886540   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.886570   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | skip adding static IP to network mk-default-k8s-diff-port-893736 - found existing host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"}
	I0816 13:44:41.886585   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserved static IP address: 192.168.50.186
	I0816 13:44:41.886600   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for SSH to be available...
	I0816 13:44:41.886617   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Getting to WaitForSSH function...
	I0816 13:44:41.888671   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889003   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.889047   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889118   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH client type: external
	I0816 13:44:41.889142   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa (-rw-------)
	I0816 13:44:41.889181   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:41.889201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | About to run SSH command:
	I0816 13:44:41.889215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | exit 0
	I0816 13:44:42.017010   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:42.017374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetConfigRaw
	I0816 13:44:42.017979   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.020580   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.020958   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.020992   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.021174   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:44:42.021342   58430 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:42.021356   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.021521   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.023732   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024033   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.024057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.024354   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024667   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.024811   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.024994   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.025005   58430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:42.137459   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:42.137495   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137722   58430 buildroot.go:166] provisioning hostname "default-k8s-diff-port-893736"
	I0816 13:44:42.137745   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137925   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.140599   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.140987   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.141017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.141148   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.141309   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141430   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141536   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.141677   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.141843   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.141855   58430 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893736 && echo "default-k8s-diff-port-893736" | sudo tee /etc/hostname
	I0816 13:44:42.267643   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893736
	
	I0816 13:44:42.267670   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.270489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.270834   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.270867   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.271089   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.271266   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271405   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271527   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.271675   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.271829   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.271847   58430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:42.398010   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
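The embedded shell just above makes the /etc/hosts entry for the new hostname idempotent: if a line already ends with the hostname it is left alone, otherwise an existing 127.0.1.1 line is rewritten or a new one is appended. A rough Go equivalent of that string edit (operating on file contents in memory; the helper name ensureHostsEntry is hypothetical and this is not minikube's code):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry reproduces the shell logic above: keep the file as-is if the
    // hostname is already mapped, otherwise rewrite an existing 127.0.1.1 line or
    // append a new one. Sketch only.
    func ensureHostsEntry(hosts, hostname string) string {
    	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
    	if hasName.MatchString(hosts) {
    		return hosts
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
    	fmt.Print(ensureHostsEntry(before, "default-k8s-diff-port-893736"))
    }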
	I0816 13:44:42.398057   58430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:42.398122   58430 buildroot.go:174] setting up certificates
	I0816 13:44:42.398139   58430 provision.go:84] configureAuth start
	I0816 13:44:42.398157   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.398484   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.401217   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401566   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.401587   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.404082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404380   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.404425   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404541   58430 provision.go:143] copyHostCerts
	I0816 13:44:42.404596   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:42.404606   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:42.404666   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:42.404758   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:42.404767   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:42.404788   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:42.404850   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:42.404857   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:42.404873   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:42.404965   58430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893736 san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]
	I0816 13:44:42.551867   58430 provision.go:177] copyRemoteCerts
	I0816 13:44:42.551928   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:42.551954   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.554945   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555276   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.555316   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.555699   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.555838   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.555964   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:42.643591   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:42.667108   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 13:44:42.690852   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:42.714001   58430 provision.go:87] duration metric: took 315.84846ms to configureAuth
	I0816 13:44:42.714030   58430 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:42.714189   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:42.714263   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.716726   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.717110   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717282   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.717486   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717621   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717740   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.717883   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.718038   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.718055   58430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:42.988769   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:42.988798   58430 machine.go:96] duration metric: took 967.444538ms to provisionDockerMachine
	I0816 13:44:42.988814   58430 start.go:293] postStartSetup for "default-k8s-diff-port-893736" (driver="kvm2")
	I0816 13:44:42.988833   58430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:42.988864   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.989226   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:42.989261   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.991868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.992184   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992364   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.992537   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.992689   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.992838   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.079199   58430 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:43.083277   58430 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:43.083296   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:43.083357   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:43.083459   58430 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:43.083576   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:43.092684   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:43.115693   58430 start.go:296] duration metric: took 126.86489ms for postStartSetup
	I0816 13:44:43.115735   58430 fix.go:56] duration metric: took 19.425768942s for fixHost
	I0816 13:44:43.115761   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.118597   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.118915   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.118947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.119100   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.119306   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119442   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.119683   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:43.119840   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:43.119850   58430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:43.233362   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815883.193133132
	
	I0816 13:44:43.233394   58430 fix.go:216] guest clock: 1723815883.193133132
	I0816 13:44:43.233406   58430 fix.go:229] Guest: 2024-08-16 13:44:43.193133132 +0000 UTC Remote: 2024-08-16 13:44:43.115740856 +0000 UTC m=+147.151935383 (delta=77.392276ms)
	I0816 13:44:43.233479   58430 fix.go:200] guest clock delta is within tolerance: 77.392276ms
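The guest-clock check above runs `date +%s.%N` on the VM, parses the seconds.nanoseconds value, and compares it to the host's clock; here the ~77ms delta is within tolerance, so no clock adjustment is needed. A small standard-library Go sketch of that parse-and-compare step (the tolerance value used below is an illustrative assumption, not the one minikube applies):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts the output of `date +%s.%N` (e.g. "1723815883.193133132"
    // in the log) into a time.Time so the host can compute the guest-vs-host delta.
    // Sketch only.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1723815883.193133132") // value taken from the log
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// The log treats a ~77ms delta as within tolerance; 2s is an assumed illustrative bound.
    	fmt.Printf("guest clock delta: %s (within 2s: %v)\n", delta, delta < 2*time.Second)
    }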
	I0816 13:44:43.233486   58430 start.go:83] releasing machines lock for "default-k8s-diff-port-893736", held for 19.543554553s
	I0816 13:44:43.233517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.233783   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:43.236492   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.236875   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.236901   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.237136   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237703   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237943   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.238074   58430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:43.238153   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.238182   58430 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:43.238215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.240639   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241029   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241193   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241360   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.241573   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241581   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.241601   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241733   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241732   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.241895   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.242052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.242178   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.352903   58430 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:43.359071   58430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:43.509233   58430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:43.516592   58430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:43.516666   58430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:43.534069   58430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:43.534096   58430 start.go:495] detecting cgroup driver to use...
	I0816 13:44:43.534167   58430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:43.553305   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:43.569958   58430 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:43.570007   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:43.590642   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:43.606411   58430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:43.733331   58430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:43.882032   58430 docker.go:233] disabling docker service ...
	I0816 13:44:43.882110   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:43.896780   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:43.909702   58430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:44.044071   58430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:44.170798   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:44.184421   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:44.203201   58430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:44.203269   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.213647   58430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:44.213708   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.224261   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.235295   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.247670   58430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:44.264065   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.276212   58430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.296049   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.307920   58430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:44.319689   58430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:44.319746   58430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:44.335735   58430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:44.352364   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:44.476754   58430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:44.618847   58430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:44.618914   58430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:44.623946   58430 start.go:563] Will wait 60s for crictl version
	I0816 13:44:44.624004   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:44:44.627796   58430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:44.666274   58430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:44.666350   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.694476   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.723937   58430 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
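
Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave the CRI-O drop-in looking roughly like the sketch below. This is reconstructed from the logged commands rather than read back from the node, and any other settings already present in /etc/crio/crio.conf.d/02-crio.conf are assumed to stay at their defaults:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstruction, not captured from the VM)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
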
	I0816 13:44:43.261237   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Start
	I0816 13:44:43.261399   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring networks are active...
	I0816 13:44:43.262183   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network default is active
	I0816 13:44:43.262591   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network mk-embed-certs-302520 is active
	I0816 13:44:43.263044   57240 main.go:141] libmachine: (embed-certs-302520) Getting domain xml...
	I0816 13:44:43.263849   57240 main.go:141] libmachine: (embed-certs-302520) Creating domain...
	I0816 13:44:44.565632   57240 main.go:141] libmachine: (embed-certs-302520) Waiting to get IP...
	I0816 13:44:44.566705   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.567120   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.567211   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.567113   59274 retry.go:31] will retry after 259.265867ms: waiting for machine to come up
	I0816 13:44:44.827603   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.828117   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.828152   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.828043   59274 retry.go:31] will retry after 271.270487ms: waiting for machine to come up
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.725112   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:44.728077   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728446   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:44.728469   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728728   58430 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:44.733365   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:44.746196   58430 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:44.746325   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:44.746385   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:44.787402   58430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:44.787481   58430 ssh_runner.go:195] Run: which lz4
	I0816 13:44:44.791755   58430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:44.797290   58430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:44.797320   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
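
Because `crictl images` found no kube-apiserver image, the ~389 MB preloaded-images tarball is pushed to the VM here; a few entries further down (13:44:46 to 13:44:48) it is unpacked straight into the CRI-O storage root and deleted, after which the image check passes and no per-image pulls are needed. The extraction is the single tar invocation already visible in the log, roughly:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
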
	I0816 13:44:42.342663   57440 pod_ready.go:93] pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.342685   57440 pod_ready.go:82] duration metric: took 2.007381193s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.342694   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346807   57440 pod_ready.go:93] pod "kube-proxy-b8d5b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.346824   57440 pod_ready.go:82] duration metric: took 4.124529ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346832   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351010   57440 pod_ready.go:93] pod "kube-scheduler-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.351025   57440 pod_ready.go:82] duration metric: took 4.186812ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351032   57440 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:44.358663   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:46.359708   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:45.100554   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.101150   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.101265   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.101207   59274 retry.go:31] will retry after 309.469795ms: waiting for machine to come up
	I0816 13:44:45.412518   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.413009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.413040   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.412975   59274 retry.go:31] will retry after 502.564219ms: waiting for machine to come up
	I0816 13:44:45.917731   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.918284   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.918316   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.918235   59274 retry.go:31] will retry after 723.442166ms: waiting for machine to come up
	I0816 13:44:46.642971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:46.643467   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:46.643498   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:46.643400   59274 retry.go:31] will retry after 600.365383ms: waiting for machine to come up
	I0816 13:44:47.245233   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:47.245756   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:47.245785   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:47.245710   59274 retry.go:31] will retry after 1.06438693s: waiting for machine to come up
	I0816 13:44:48.312043   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:48.312842   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:48.312886   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:48.312840   59274 retry.go:31] will retry after 903.877948ms: waiting for machine to come up
	I0816 13:44:49.218419   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:49.218805   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:49.218835   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:49.218758   59274 retry.go:31] will retry after 1.73892963s: waiting for machine to come up
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.230345   58430 crio.go:462] duration metric: took 1.438624377s to copy over tarball
	I0816 13:44:46.230429   58430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:48.358060   58430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127589486s)
	I0816 13:44:48.358131   58430 crio.go:469] duration metric: took 2.127754698s to extract the tarball
	I0816 13:44:48.358145   58430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:48.398054   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:48.449391   58430 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:44:48.449416   58430 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:44:48.449425   58430 kubeadm.go:934] updating node { 192.168.50.186 8444 v1.31.0 crio true true} ...
	I0816 13:44:48.449576   58430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-893736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:48.449662   58430 ssh_runner.go:195] Run: crio config
	I0816 13:44:48.499389   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:48.499413   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:48.499424   58430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:48.499452   58430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893736 NodeName:default-k8s-diff-port-893736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:48.499576   58430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-893736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:48.499653   58430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:48.509639   58430 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:48.509706   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:48.519099   58430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 13:44:48.535866   58430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:48.552977   58430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 13:44:48.571198   58430 ssh_runner.go:195] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:48.575881   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:48.587850   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:48.703848   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:48.721449   58430 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736 for IP: 192.168.50.186
	I0816 13:44:48.721476   58430 certs.go:194] generating shared ca certs ...
	I0816 13:44:48.721496   58430 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:48.721677   58430 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:48.721731   58430 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:48.721745   58430 certs.go:256] generating profile certs ...
	I0816 13:44:48.721843   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/client.key
	I0816 13:44:48.721926   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key.64c9b41b
	I0816 13:44:48.721980   58430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key
	I0816 13:44:48.722107   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:48.722138   58430 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:48.722149   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:48.722182   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:48.722204   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:48.722225   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:48.722258   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:48.722818   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:48.779462   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:48.814653   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:48.887435   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:48.913644   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 13:44:48.937536   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:48.960729   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:48.984375   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:44:49.007997   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:49.031631   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:49.054333   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:49.076566   58430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:49.092986   58430 ssh_runner.go:195] Run: openssl version
	I0816 13:44:49.098555   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:49.109454   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114868   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114934   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.120811   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:49.131829   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:49.142825   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147276   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147322   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.152678   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:49.163622   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:49.174426   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179353   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179406   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.185129   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:49.196668   58430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:49.201447   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:49.207718   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:49.213869   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:49.220325   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:49.226220   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:49.231971   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
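
The run of `openssl x509 ... -checkend 86400` calls above verifies that each reused control-plane certificate is still good for at least another 86400 seconds (24 hours); `-checkend` makes openssl exit non-zero if the certificate would expire within that window. A standalone check of one of these certs would look like:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h"
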
	I0816 13:44:49.238080   58430 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:49.238178   58430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:49.238231   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.276621   58430 cri.go:89] found id: ""
	I0816 13:44:49.276719   58430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:49.287765   58430 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:49.287785   58430 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:49.287829   58430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:49.298038   58430 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:49.299171   58430 kubeconfig.go:125] found "default-k8s-diff-port-893736" server: "https://192.168.50.186:8444"
	I0816 13:44:49.301521   58430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:49.311800   58430 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.186
	I0816 13:44:49.311833   58430 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:49.311845   58430 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:49.311899   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.363716   58430 cri.go:89] found id: ""
	I0816 13:44:49.363784   58430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:49.381053   58430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:49.391306   58430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:49.391322   58430 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:49.391370   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 13:44:49.400770   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:49.400829   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:49.410252   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 13:44:49.419405   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:49.419481   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:49.429330   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.438521   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:49.438587   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.448144   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 13:44:49.456744   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:49.456805   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:49.466062   58430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:49.476159   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:49.597639   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.673182   58430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075495766s)
	I0816 13:44:50.673218   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.887802   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.958384   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
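
Stripped of the `sudo env PATH=...` wrapper and the interleaved output from the other profiles, the restart path above rebuilds control-plane state with individual kubeadm phases rather than a full `kubeadm init`. The sequence logged between 13:44:49.476 and 13:44:50.958 is, in effect:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml

The apiserver process wait and the /healthz polling for this profile follow from here.
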
	I0816 13:44:48.858145   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:51.358083   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:50.959807   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:50.960217   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:50.960236   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:50.960188   59274 retry.go:31] will retry after 2.32558417s: waiting for machine to come up
	I0816 13:44:53.287947   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:53.288441   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:53.288470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:53.288388   59274 retry.go:31] will retry after 1.85414625s: waiting for machine to come up
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.054015   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:51.054101   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.554846   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.055178   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.082087   58430 api_server.go:72] duration metric: took 1.028080423s to wait for apiserver process to appear ...
	I0816 13:44:52.082114   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:52.082133   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:52.082624   58430 api_server.go:269] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": dial tcp 192.168.50.186:8444: connect: connection refused
	I0816 13:44:52.582261   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.336530   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.336565   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.336580   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.374699   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.374733   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.583112   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.588756   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:55.588782   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.082182   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.088062   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.088108   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.582273   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.587049   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.587087   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:57.082664   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:57.092562   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:44:57.100740   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:57.100767   58430 api_server.go:131] duration metric: took 5.018647278s to wait for apiserver health ...
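	(The 500s above come from a single failing aggregated check, [-]poststarthook/rbac/bootstrap-roles, which clears once the bootstrap RBAC roles are created; minikube just keeps re-polling /healthz until it gets a 200, on roughly the 500ms cadence visible in the timestamps. A minimal sketch of that polling pattern follows — illustrative only, not minikube's actual api_server.go; the InsecureSkipVerify TLS config stands in for the real client-cert handling.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. Skipping TLS verification is a simplification for
// this sketch; only use it against a local test cluster.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.186:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```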
	I0816 13:44:57.100777   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:57.100784   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:57.102775   58430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:53.358390   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:55.358437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:57.104079   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:57.115212   58430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:57.137445   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:57.150376   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:57.150412   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:57.150422   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:57.150435   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:57.150448   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:57.150454   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:57.150458   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:57.150463   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:57.150472   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:44:57.150481   58430 system_pods.go:74] duration metric: took 13.019757ms to wait for pod list to return data ...
	I0816 13:44:57.150489   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:57.153699   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:57.153721   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:57.153731   58430 node_conditions.go:105] duration metric: took 3.237407ms to run NodePressure ...
	I0816 13:44:57.153752   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:57.439130   58430 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446848   58430 kubeadm.go:739] kubelet initialised
	I0816 13:44:57.446876   58430 kubeadm.go:740] duration metric: took 7.718113ms waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446885   58430 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:57.452263   58430 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.459002   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459024   58430 pod_ready.go:82] duration metric: took 6.735487ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.459033   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459039   58430 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.463723   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463742   58430 pod_ready.go:82] duration metric: took 4.695932ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.463751   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463756   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.468593   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468619   58430 pod_ready.go:82] duration metric: took 4.856498ms for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.468632   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468643   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.541251   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541278   58430 pod_ready.go:82] duration metric: took 72.626413ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.541290   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541296   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.940580   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940616   58430 pod_ready.go:82] duration metric: took 399.312571ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.940627   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940635   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.340647   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340671   58430 pod_ready.go:82] duration metric: took 400.026004ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.340683   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340694   58430 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.750549   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750573   58430 pod_ready.go:82] duration metric: took 409.872187ms for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.750588   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750598   58430 pod_ready.go:39] duration metric: took 1.303702313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
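	(Each pod_ready wait above is skipped because the node itself still reports "Ready":"False"; once the node is Ready, the check reduces to reading the pod's Ready condition. A minimal client-go sketch of that condition check follows — illustrative only, not minikube's pod_ready.go, which additionally applies the node-readiness short-circuit seen above. The kubeconfig path and pod name are taken from this log.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-3966/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-6f6b679f8f-xdwhx")
	fmt.Println(ready, err)
}
```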
	I0816 13:44:58.750626   58430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:58.766462   58430 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:58.766482   58430 kubeadm.go:597] duration metric: took 9.478690644s to restartPrimaryControlPlane
	I0816 13:44:58.766491   58430 kubeadm.go:394] duration metric: took 9.528416258s to StartCluster
	I0816 13:44:58.766509   58430 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.766572   58430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:58.770737   58430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.771036   58430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:58.771138   58430 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:58.771218   58430 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771232   58430 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771245   58430 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771281   58430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893736"
	I0816 13:44:58.771252   58430 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771337   58430 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:58.771371   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771285   58430 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771447   58430 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:58.771485   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771231   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:58.771653   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771682   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771750   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771781   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771839   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771886   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.772665   58430 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:58.773992   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:58.788717   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0816 13:44:58.789233   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.789833   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.789859   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.790269   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.790882   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.790913   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.791553   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0816 13:44:58.791556   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0816 13:44:58.791945   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.791979   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.792413   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792440   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.792813   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.792963   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792986   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.793013   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.793374   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.793940   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.793986   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.796723   58430 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.796740   58430 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:58.796763   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.797138   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.797184   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.806753   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0816 13:44:58.807162   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.807605   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.807624   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.807984   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.808229   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.809833   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.811642   58430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:58.812716   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0816 13:44:58.812888   58430 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:58.812902   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:58.812937   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.813184   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.813668   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.813695   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.813725   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0816 13:44:58.814101   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.814207   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.814696   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.814715   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.814948   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.814961   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.815304   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.815518   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.816936   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817482   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.817529   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.817543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817871   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.818057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.818219   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.818397   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.819251   58430 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:55.143862   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:55.144403   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:55.144433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:55.144354   59274 retry.go:31] will retry after 3.573850343s: waiting for machine to come up
	I0816 13:44:58.720104   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:58.720571   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:58.720606   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:58.720510   59274 retry.go:31] will retry after 4.52867767s: waiting for machine to come up
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.820720   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:58.820733   58430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:58.820747   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.823868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824290   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.824305   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.824629   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.824764   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.824860   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.830530   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0816 13:44:58.830894   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.831274   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.831294   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.831583   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.831729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.833321   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.833512   58430 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:58.833526   58430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:58.833543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.836244   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836626   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.836649   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836762   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.836947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.837098   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.837234   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
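	(The repeated "new ssh client" entries above are minikube opening key-based SSH sessions to the VM so it can scp the addon manifests and run kubectl apply. A minimal sketch of opening such a session with golang.org/x/crypto/ssh follows — illustrative only, not minikube's sshutil; host-key checking is disabled purely for the sketch. The address, user, and key path are the ones shown in the log.)

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens an SSH session to a minikube VM using its per-machine private key.
func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real use
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dialNode("192.168.50.186:22", "docker",
		"/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("ssh session established")
}
```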
	I0816 13:44:58.973561   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:58.995763   58430 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:44:59.118558   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:59.126100   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:59.126125   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:59.154048   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:59.162623   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:59.162649   58430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:59.213614   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.213635   58430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:59.233653   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.485000   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485030   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485329   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.485384   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485397   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485406   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485414   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485736   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485777   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485741   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.491692   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.491711   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.491938   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.491957   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.273964   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.04027784s)
	I0816 13:45:00.274018   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274036   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274032   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119945545s)
	I0816 13:45:00.274065   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274080   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274373   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274388   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274398   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274406   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274441   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274481   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274499   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274513   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274620   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274633   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274643   58430 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-893736"
	I0816 13:45:00.274749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274842   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274851   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.276747   58430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 13:45:00.278150   58430 addons.go:510] duration metric: took 1.506994649s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 13:44:57.858846   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:00.357028   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:03.253913   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254379   57240 main.go:141] libmachine: (embed-certs-302520) Found IP for machine: 192.168.39.125
	I0816 13:45:03.254401   57240 main.go:141] libmachine: (embed-certs-302520) Reserving static IP address...
	I0816 13:45:03.254418   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has current primary IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254776   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.254804   57240 main.go:141] libmachine: (embed-certs-302520) Reserved static IP address: 192.168.39.125
	I0816 13:45:03.254822   57240 main.go:141] libmachine: (embed-certs-302520) DBG | skip adding static IP to network mk-embed-certs-302520 - found existing host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"}
	I0816 13:45:03.254840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Getting to WaitForSSH function...
	I0816 13:45:03.254848   57240 main.go:141] libmachine: (embed-certs-302520) Waiting for SSH to be available...
	I0816 13:45:03.256951   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257302   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.257327   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH client type: external
	I0816 13:45:03.257483   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa (-rw-------)
	I0816 13:45:03.257519   57240 main.go:141] libmachine: (embed-certs-302520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:45:03.257528   57240 main.go:141] libmachine: (embed-certs-302520) DBG | About to run SSH command:
	I0816 13:45:03.257537   57240 main.go:141] libmachine: (embed-certs-302520) DBG | exit 0
	I0816 13:45:03.389262   57240 main.go:141] libmachine: (embed-certs-302520) DBG | SSH cmd err, output: <nil>: 
	I0816 13:45:03.389630   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetConfigRaw
	I0816 13:45:03.390305   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.392462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.392767   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.392795   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.393012   57240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:45:03.393212   57240 machine.go:93] provisionDockerMachine start ...
	I0816 13:45:03.393230   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:03.393453   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.395589   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.395949   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.395971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.396086   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.396258   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396589   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.396785   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.397004   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.397042   57240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:45:03.513624   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:45:03.513655   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.513954   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:45:03.513976   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.514199   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.517138   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517499   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.517520   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517672   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.517867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518007   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518168   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.518364   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.518583   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.518599   57240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-302520 && echo "embed-certs-302520" | sudo tee /etc/hostname
	I0816 13:45:03.647799   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-302520
	
	I0816 13:45:03.647840   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.650491   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.650846   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.650880   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.651103   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.651301   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651469   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651614   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.651778   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.651935   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.651951   57240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-302520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-302520/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-302520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:45:03.778350   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:45:03.778381   57240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:45:03.778411   57240 buildroot.go:174] setting up certificates
	I0816 13:45:03.778423   57240 provision.go:84] configureAuth start
	I0816 13:45:03.778435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.778689   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.781319   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781673   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.781695   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781829   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.783724   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784035   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.784064   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784180   57240 provision.go:143] copyHostCerts
	I0816 13:45:03.784243   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:45:03.784262   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:45:03.784335   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:45:03.784462   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:45:03.784474   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:45:03.784503   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:45:03.784568   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:45:03.784578   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:45:03.784600   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:45:03.784647   57240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.embed-certs-302520 san=[127.0.0.1 192.168.39.125 embed-certs-302520 localhost minikube]
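	(The SAN list in the line above — 127.0.0.1, 192.168.39.125, embed-certs-302520, localhost, minikube — maps directly onto the DNSNames and IPAddresses fields of an x509 template. A minimal crypto/x509 sketch follows; it produces a self-signed certificate for brevity, whereas minikube actually signs the server cert with its machine CA, so this is illustration only.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template carrying the same SANs shown in the provisioning log line.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-302520"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-302520", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.125")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```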
	I0816 13:45:03.901261   57240 provision.go:177] copyRemoteCerts
	I0816 13:45:03.901314   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:45:03.901339   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.904505   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.904893   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.904933   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.905118   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.905329   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.905499   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.905650   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:03.996083   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:45:04.024594   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:45:04.054080   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:45:04.079810   57240 provision.go:87] duration metric: took 301.374056ms to configureAuth
	I0816 13:45:04.079865   57240 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:45:04.080048   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:45:04.080116   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.082649   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083037   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.083090   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083239   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.083430   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083775   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.083951   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.084149   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.084171   57240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:45:04.404121   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:45:04.404150   57240 machine.go:96] duration metric: took 1.010924979s to provisionDockerMachine
	I0816 13:45:04.404163   57240 start.go:293] postStartSetup for "embed-certs-302520" (driver="kvm2")
	I0816 13:45:04.404182   57240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:45:04.404202   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.404542   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:45:04.404574   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.407763   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408118   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.408145   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408311   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.408508   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.408685   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.408864   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.496519   57240 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:45:04.501262   57240 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:45:04.501282   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:45:04.501352   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:45:04.501440   57240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:45:04.501554   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:45:04.511338   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:04.535372   57240 start.go:296] duration metric: took 131.188411ms for postStartSetup
	I0816 13:45:04.535411   57240 fix.go:56] duration metric: took 21.301761751s for fixHost
	I0816 13:45:04.535435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.538286   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538651   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.538676   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538868   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.539069   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539208   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539344   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.539504   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.539702   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.539715   57240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:45:04.653529   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815904.606422212
	
	I0816 13:45:04.653556   57240 fix.go:216] guest clock: 1723815904.606422212
	I0816 13:45:04.653566   57240 fix.go:229] Guest: 2024-08-16 13:45:04.606422212 +0000 UTC Remote: 2024-08-16 13:45:04.535416156 +0000 UTC m=+359.547804920 (delta=71.006056ms)
	I0816 13:45:04.653598   57240 fix.go:200] guest clock delta is within tolerance: 71.006056ms
	I0816 13:45:04.653605   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 21.419990329s
	I0816 13:45:04.653631   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.653922   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:04.656682   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.657034   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657211   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657800   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657981   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.658069   57240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:45:04.658114   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.658172   57240 ssh_runner.go:195] Run: cat /version.json
	I0816 13:45:04.658193   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.660629   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.660942   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661051   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661076   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661315   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661474   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.661598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661646   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.661841   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.661904   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.662054   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.662199   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.767691   57240 ssh_runner.go:195] Run: systemctl --version
	I0816 13:45:04.773984   57240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:45:04.925431   57240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:45:04.931848   57240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:45:04.931931   57240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:45:04.951355   57240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:45:04.951381   57240 start.go:495] detecting cgroup driver to use...
	I0816 13:45:04.951442   57240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:45:04.972903   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:45:04.987531   57240 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:45:04.987600   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:45:05.001880   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:45:05.018403   57240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.999833   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:03.500652   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:05.143662   57240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:45:05.297447   57240 docker.go:233] disabling docker service ...
	I0816 13:45:05.297527   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:45:05.313382   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:45:05.327116   57240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:45:05.486443   57240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:45:05.620465   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:45:05.634813   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:45:05.653822   57240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:45:05.653887   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.664976   57240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:45:05.665045   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.676414   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.688631   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.700400   57240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:45:05.712822   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.724573   57240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.742934   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.755669   57240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:45:05.766837   57240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:45:05.766890   57240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:45:05.782296   57240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:45:05.793695   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.919559   57240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:45:06.057480   57240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:45:06.057543   57240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:45:06.062348   57240 start.go:563] Will wait 60s for crictl version
	I0816 13:45:06.062414   57240 ssh_runner.go:195] Run: which crictl
	I0816 13:45:06.066456   57240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:45:06.104075   57240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:45:06.104156   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.132406   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.161878   57240 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:45:02.357119   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:04.361365   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.859546   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.163233   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:06.165924   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166310   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:06.166333   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166529   57240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:45:06.170722   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:06.183152   57240 kubeadm.go:883] updating cluster {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:45:06.183256   57240 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:45:06.183306   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:06.223405   57240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:45:06.223489   57240 ssh_runner.go:195] Run: which lz4
	I0816 13:45:06.227851   57240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:45:06.232132   57240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:45:06.232156   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:45:07.642616   57240 crio.go:462] duration metric: took 1.414789512s to copy over tarball
	I0816 13:45:07.642698   57240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:45:09.794329   57240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.151601472s)
	I0816 13:45:09.794359   57240 crio.go:469] duration metric: took 2.151717024s to extract the tarball
	I0816 13:45:09.794369   57240 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:45:09.833609   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:09.878781   57240 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:45:09.878806   57240 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:45:09.878815   57240 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0816 13:45:09.878944   57240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:45:09.879032   57240 ssh_runner.go:195] Run: crio config
	I0816 13:45:09.924876   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:09.924900   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:09.924927   57240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:45:09.924958   57240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-302520 NodeName:embed-certs-302520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:45:09.925150   57240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-302520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:45:09.925226   57240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:45:09.935204   57240 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:45:09.935280   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:45:09.945366   57240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 13:45:09.961881   57240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:45:09.978495   57240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 13:45:09.995664   57240 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0816 13:45:10.000132   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:10.013039   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.000553   58430 node_ready.go:49] node "default-k8s-diff-port-893736" has status "Ready":"True"
	I0816 13:45:06.000579   58430 node_ready.go:38] duration metric: took 7.004778161s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:45:06.000590   58430 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:06.006987   58430 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012552   58430 pod_ready.go:93] pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.012577   58430 pod_ready.go:82] duration metric: took 5.565882ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012588   58430 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519889   58430 pod_ready.go:93] pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.519919   58430 pod_ready.go:82] duration metric: took 507.322547ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519932   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:08.527411   58430 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:09.527923   58430 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.527950   58430 pod_ready.go:82] duration metric: took 3.008009418s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.527963   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534422   58430 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.534460   58430 pod_ready.go:82] duration metric: took 6.488169ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534476   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538660   58430 pod_ready.go:93] pod "kube-proxy-btq6r" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.538688   58430 pod_ready.go:82] duration metric: took 4.202597ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538700   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600350   58430 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.600377   58430 pod_ready.go:82] duration metric: took 61.666987ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600391   58430 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.361968   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:11.859112   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:10.143519   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:45:10.160358   57240 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520 for IP: 192.168.39.125
	I0816 13:45:10.160381   57240 certs.go:194] generating shared ca certs ...
	I0816 13:45:10.160400   57240 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:45:10.160591   57240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:45:10.160646   57240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:45:10.160656   57240 certs.go:256] generating profile certs ...
	I0816 13:45:10.160767   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/client.key
	I0816 13:45:10.160845   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key.f0c5f9ff
	I0816 13:45:10.160893   57240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key
	I0816 13:45:10.161075   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:45:10.161133   57240 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:45:10.161148   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:45:10.161182   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:45:10.161213   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:45:10.161243   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:45:10.161298   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:10.161944   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:45:10.202268   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:45:10.242684   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:45:10.287223   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:45:10.316762   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 13:45:10.343352   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:45:10.371042   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:45:10.394922   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:45:10.419358   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:45:10.442301   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:45:10.465266   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:45:10.487647   57240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:45:10.504713   57240 ssh_runner.go:195] Run: openssl version
	I0816 13:45:10.510493   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:45:10.521818   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526637   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526681   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.532660   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:45:10.543403   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:45:10.554344   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559089   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559149   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.564982   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:45:10.576074   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:45:10.586596   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591586   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591637   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.597624   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:45:10.608838   57240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:45:10.613785   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:45:10.619902   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:45:10.625554   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:45:10.631526   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:45:10.637251   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:45:10.643210   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:45:10.649187   57240 kubeadm.go:392] StartCluster: {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:45:10.649298   57240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:45:10.649349   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.686074   57240 cri.go:89] found id: ""
	I0816 13:45:10.686153   57240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:45:10.696504   57240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:45:10.696527   57240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:45:10.696581   57240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:45:10.706447   57240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:45:10.707413   57240 kubeconfig.go:125] found "embed-certs-302520" server: "https://192.168.39.125:8443"
	I0816 13:45:10.710045   57240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:45:10.719563   57240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0816 13:45:10.719599   57240 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:45:10.719613   57240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:45:10.719665   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.759584   57240 cri.go:89] found id: ""
	I0816 13:45:10.759661   57240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:45:10.776355   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:45:10.786187   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:45:10.786205   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:45:10.786244   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:45:10.795644   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:45:10.795723   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:45:10.807988   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:45:10.817234   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:45:10.817299   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:45:10.826601   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.835702   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:45:10.835763   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.845160   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:45:10.855522   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:45:10.855578   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:45:10.865445   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:45:10.875429   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:10.988958   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.195215   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206217359s)
	I0816 13:45:12.195241   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.432322   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.514631   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.606133   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:45:12.606238   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.106823   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.606856   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.624866   57240 api_server.go:72] duration metric: took 1.018743147s to wait for apiserver process to appear ...
	I0816 13:45:13.624897   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:45:13.624930   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:13.625953   57240 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I0816 13:45:14.124979   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.607443   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.107647   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.357916   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.358986   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.404020   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.404049   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.404062   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.462649   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.462685   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.625998   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.632560   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:16.632586   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.124984   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.133533   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:17.133563   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.624993   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.629720   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:45:17.635848   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:45:17.635874   57240 api_server.go:131] duration metric: took 4.010970063s to wait for apiserver health ...
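For reference, the [+]/[-] breakdown repeated above is the apiserver's verbose healthz body, returned with each 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks were still pending. Once the endpoint returns 200, the same per-check view can be fetched through the profile's kubeconfig; an illustrative command, not part of the recorded run (context name assumed from the profile):

    kubectl --context embed-certs-302520 get --raw='/healthz?verbose'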
	I0816 13:45:17.635885   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:17.635892   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:17.637609   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:45:17.638828   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:45:17.650034   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
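The 496-byte conflist copied above is not reproduced in the log. As a rough sketch only (field values and the pod subnet are assumptions, not the recorded file), a bridge + portmap conflist of the kind the bridge CNI option writes looks like:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF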
	I0816 13:45:17.681352   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:45:17.691752   57240 system_pods.go:59] 8 kube-system pods found
	I0816 13:45:17.691784   57240 system_pods.go:61] "coredns-6f6b679f8f-phxht" [df7bd896-d1c6-4a0e-aead-e3db36e915da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:45:17.691792   57240 system_pods.go:61] "etcd-embed-certs-302520" [ef7bae1c-7cd3-4d8e-b2fc-e5837f4c5a1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:45:17.691801   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [957ba8ec-91ae-4cea-902f-81a286e35659] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:45:17.691806   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [afbfc2da-5435-4ebb-ada0-e0edc9d09a7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:45:17.691817   57240 system_pods.go:61] "kube-proxy-nnc6b" [ec8b820d-6f1d-4777-9f76-7efffb4e6e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:45:17.691824   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [077024c8-3dfd-4e8c-850a-333b63d3f23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:45:17.691832   57240 system_pods.go:61] "metrics-server-6867b74b74-9277d" [5d7ee9e5-b40c-4840-9fb4-0b7b8be9faca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:45:17.691837   57240 system_pods.go:61] "storage-provisioner" [6f3dc7f6-a3e0-4bc3-b362-e1d97633d0eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:45:17.691854   57240 system_pods.go:74] duration metric: took 10.481601ms to wait for pod list to return data ...
	I0816 13:45:17.691861   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:45:17.695253   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:45:17.695278   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:45:17.695292   57240 node_conditions.go:105] duration metric: took 3.4236ms to run NodePressure ...
	I0816 13:45:17.695311   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:17.996024   57240 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999887   57240 kubeadm.go:739] kubelet initialised
	I0816 13:45:17.999906   57240 kubeadm.go:740] duration metric: took 3.859222ms waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999913   57240 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
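The per-pod waits below poll each system-critical pod until its Ready condition is true, skipping pods whose node is not yet Ready. A rough manual equivalent for one of the label selectors listed above (illustrative, not from the run; context name assumed from the profile):

    kubectl --context embed-certs-302520 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s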
	I0816 13:45:18.004476   57240 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.009142   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009162   57240 pod_ready.go:82] duration metric: took 4.665087ms for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.009170   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009175   57240 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.014083   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014102   57240 pod_ready.go:82] duration metric: took 4.91913ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.014118   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014124   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.018257   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018276   57240 pod_ready.go:82] duration metric: took 4.14471ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.018283   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018288   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.085229   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085257   57240 pod_ready.go:82] duration metric: took 66.95357ms for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.085269   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085276   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485094   57240 pod_ready.go:93] pod "kube-proxy-nnc6b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:18.485124   57240 pod_ready.go:82] duration metric: took 399.831747ms for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485135   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
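The interleaved 57945 lines above come from a second profile (the one using the v1.20.0 binaries further down), polling roughly every 500ms for a running apiserver process before falling back to container inspection. The pgrep flags read as:

    # -f: match against the full command line, -x: the regex must match that line exactly, -n: newest match only
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'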
	I0816 13:45:16.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.606838   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.857109   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.858242   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.491635   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:22.492484   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:24.992054   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.107371   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.607589   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.357770   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.358102   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:26.992544   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.491552   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.106454   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:28.107564   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.115954   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:27.358671   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.857631   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:31.862487   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.491291   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:30.491320   57240 pod_ready.go:82] duration metric: took 12.006175772s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:30.491333   57240 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:32.497481   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.500397   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
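Each failed pass ends with the same evidence sweep: kubelet and CRI-O journals, dmesg, `kubectl describe nodes` (which fails here because nothing is listening on localhost:8443), and a container listing. The container-status command is written defensively; read as:

    # use crictl if `which` can find it, otherwise try the bare name, and fall back to docker if that fails too
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a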
	I0816 13:45:32.606912   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.607600   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.358649   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:36.856727   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.003939   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.498178   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:37.106303   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.107343   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:38.857564   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.858908   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:41.998945   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:41.609813   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:44.116781   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.358670   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:45.857710   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.497146   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:48.498302   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.606815   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.107387   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:47.858274   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.861382   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.999007   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:53.497248   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:51.607047   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.106991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:52.363317   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.857924   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.497520   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.498597   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.997729   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.107187   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:58.606550   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.358875   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.857940   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.859677   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.998260   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.498473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:01.106533   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:03.107325   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.107629   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.359077   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.857557   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.997606   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:08.998915   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:07.606112   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.608137   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.357201   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.359055   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.497851   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.998849   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:12.106330   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:14.106732   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.857302   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:15.858233   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:16.497347   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.497948   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:16.107456   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.107650   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.608155   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.357472   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.857964   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.498563   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:22.998319   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:23.106190   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.106765   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:23.356443   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.356705   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.496930   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.497933   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.500639   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:46:27.606739   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.606870   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.357970   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.861012   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:31.998584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.497511   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.106718   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.606961   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:32.357555   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.857062   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.857679   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.997085   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.999741   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:37.113026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:39.606526   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.857809   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.358047   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.497724   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.498815   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.108651   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:44.606597   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.856419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.858360   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.998195   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.497584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:47.106288   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:49.108117   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.357517   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.857069   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.501014   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:52.998618   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.607335   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:54.106631   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:53.357847   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.857456   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.497897   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.999173   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:56.107168   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:58.606805   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.607098   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.857681   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.357433   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.497738   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.998036   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.998316   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:02.607546   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:05.106955   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.358368   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.359618   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.858437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.999109   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.498087   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:07.107809   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.606867   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.358384   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.359451   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.997273   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.998709   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.107421   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:14.606538   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.857389   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.857870   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:16.498127   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.498877   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:16.607841   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:19.107952   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.358184   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.857525   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.499116   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.996642   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.998888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.607247   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:23.608388   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.857995   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.858506   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.497595   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.997611   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:26.107830   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:28.607294   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:30.613619   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.356723   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.358026   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.857750   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:32.497685   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.500214   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:33.106665   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:35.107026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.358059   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.858347   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.998289   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.499047   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:37.107091   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.108555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.358316   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.858946   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.998041   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.498467   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:41.606734   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:43.608097   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.357080   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.357589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.503576   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.998084   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.105523   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.107122   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.606969   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.357769   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.859617   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.998256   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.498046   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:52.607529   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.107256   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.357315   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.357380   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.498193   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.498238   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.997582   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:57.606146   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.606603   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.358416   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.861332   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.998633   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.498646   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:01.607315   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.107759   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:02.358142   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.857766   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:06.998437   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.496888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:06.110473   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:08.606467   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:07.356589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.357419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.357839   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.497754   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.997641   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:11.108299   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.612816   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.856979   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.857269   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.999100   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.497473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:16.105992   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.606940   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.607532   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.357812   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.358351   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.498644   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.998103   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:24.999122   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:23.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.607110   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.858674   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.358411   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.497538   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.498345   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:27.607276   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:30.106660   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.856585   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.857836   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:31.498615   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:33.999039   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.107363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.607045   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.357801   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.856980   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.857321   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.497064   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.498666   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:48:36.608364   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:39.106585   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.857386   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.358217   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:40.997776   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.998819   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:44.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.106806   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:43.107007   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:45.606716   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.357715   57440 pod_ready.go:82] duration metric: took 4m0.006671881s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:48:42.357741   57440 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:48:42.357749   57440 pod_ready.go:39] duration metric: took 4m4.542302811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:48:42.357762   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:48:42.357787   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:42.357834   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:42.415231   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:42.415255   57440 cri.go:89] found id: ""
	I0816 13:48:42.415265   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:42.415324   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.421713   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:42.421779   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:42.462840   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:42.462867   57440 cri.go:89] found id: ""
	I0816 13:48:42.462878   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:42.462940   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.467260   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:42.467321   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:42.505423   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:42.505449   57440 cri.go:89] found id: ""
	I0816 13:48:42.505458   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:42.505517   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.510072   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:42.510124   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:42.551873   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:42.551894   57440 cri.go:89] found id: ""
	I0816 13:48:42.551902   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:42.551949   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.556735   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:42.556783   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:42.595853   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:42.595884   57440 cri.go:89] found id: ""
	I0816 13:48:42.595895   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:42.595948   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.600951   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:42.601003   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:42.639288   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.639311   57440 cri.go:89] found id: ""
	I0816 13:48:42.639320   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:42.639367   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.644502   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:42.644554   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:42.685041   57440 cri.go:89] found id: ""
	I0816 13:48:42.685065   57440 logs.go:276] 0 containers: []
	W0816 13:48:42.685074   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:42.685079   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:42.685137   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:42.722485   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.722506   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:42.722510   57440 cri.go:89] found id: ""
	I0816 13:48:42.722519   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:42.722590   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.727136   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.731169   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:42.731189   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.794303   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:42.794334   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.833686   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:42.833715   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:42.874606   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:42.874632   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:42.948074   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:42.948111   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:42.963546   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:42.963571   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:43.027410   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:43.027446   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:43.067643   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:43.067670   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:43.115156   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:43.115183   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:43.246588   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:43.246618   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:43.291042   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:43.291069   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:43.330741   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:43.330771   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:43.371970   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:43.371999   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:46.357313   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:46.373368   57440 api_server.go:72] duration metric: took 4m16.32601859s to wait for apiserver process to appear ...
	I0816 13:48:46.373396   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:48:46.373426   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:46.373473   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:46.411034   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:46.411059   57440 cri.go:89] found id: ""
	I0816 13:48:46.411067   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:46.411121   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.415948   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:46.416009   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:46.458648   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:46.458673   57440 cri.go:89] found id: ""
	I0816 13:48:46.458684   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:46.458735   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.463268   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:46.463332   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:46.502120   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:46.502139   57440 cri.go:89] found id: ""
	I0816 13:48:46.502149   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:46.502319   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.508632   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:46.508692   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:46.552732   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.552757   57440 cri.go:89] found id: ""
	I0816 13:48:46.552765   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:46.552812   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.557459   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:46.557524   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:46.598286   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:46.598308   57440 cri.go:89] found id: ""
	I0816 13:48:46.598330   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:46.598403   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.603050   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:46.603110   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:46.641616   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:46.641638   57440 cri.go:89] found id: ""
	I0816 13:48:46.641648   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:46.641712   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.646008   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:46.646076   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:46.682259   57440 cri.go:89] found id: ""
	I0816 13:48:46.682290   57440 logs.go:276] 0 containers: []
	W0816 13:48:46.682302   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:46.682310   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:46.682366   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:46.718955   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.718979   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:46.718985   57440 cri.go:89] found id: ""
	I0816 13:48:46.718993   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:46.719049   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.723519   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.727942   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:46.727968   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.771942   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:46.771971   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.818294   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:46.818319   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:46.887977   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:46.888021   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:46.903567   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:46.903599   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:47.010715   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:47.010747   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:47.056317   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:47.056346   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:47.114669   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:47.114696   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:47.498472   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.998541   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.606991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.607458   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.157046   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:47.157073   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:47.199364   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:47.199393   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:47.640964   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:47.641003   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:47.683503   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:47.683541   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:47.746748   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:47.746798   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.296176   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:48:50.300482   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:48:50.301550   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:48:50.301570   57440 api_server.go:131] duration metric: took 3.928168044s to wait for apiserver health ...
	I0816 13:48:50.301578   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:48:50.301599   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:50.301653   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:50.343199   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.343223   57440 cri.go:89] found id: ""
	I0816 13:48:50.343231   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:50.343276   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.347576   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:50.347651   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:50.387912   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.387937   57440 cri.go:89] found id: ""
	I0816 13:48:50.387947   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:50.388004   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.392120   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:50.392188   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:50.428655   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:50.428680   57440 cri.go:89] found id: ""
	I0816 13:48:50.428688   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:50.428734   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.432863   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:50.432941   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:50.472269   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:50.472295   57440 cri.go:89] found id: ""
	I0816 13:48:50.472304   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:50.472351   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.476961   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:50.477006   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:50.514772   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:50.514793   57440 cri.go:89] found id: ""
	I0816 13:48:50.514801   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:50.514857   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.520430   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:50.520492   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:50.564708   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:50.564733   57440 cri.go:89] found id: ""
	I0816 13:48:50.564741   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:50.564788   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.569255   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:50.569306   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:50.607803   57440 cri.go:89] found id: ""
	I0816 13:48:50.607823   57440 logs.go:276] 0 containers: []
	W0816 13:48:50.607829   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:50.607835   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:50.607888   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:50.643909   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.643934   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:50.643940   57440 cri.go:89] found id: ""
	I0816 13:48:50.643949   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:50.643994   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.648575   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.653322   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:50.653354   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:50.667847   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:50.667878   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:50.774932   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:50.774969   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.823473   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:50.823503   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.884009   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:50.884044   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.925187   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:50.925219   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.965019   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:50.965046   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:51.033614   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:51.033651   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:51.068360   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:51.068387   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:51.107768   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:51.107792   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:51.163637   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:51.163673   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:51.227436   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:51.227462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:51.265505   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:51.265531   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:54.130801   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:48:54.130828   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.130833   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.130837   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.130840   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.130843   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.130846   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.130852   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.130855   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.130862   57440 system_pods.go:74] duration metric: took 3.829279192s to wait for pod list to return data ...
	I0816 13:48:54.130868   57440 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:48:54.133253   57440 default_sa.go:45] found service account: "default"
	I0816 13:48:54.133282   57440 default_sa.go:55] duration metric: took 2.407297ms for default service account to be created ...
	I0816 13:48:54.133292   57440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:48:54.138812   57440 system_pods.go:86] 8 kube-system pods found
	I0816 13:48:54.138835   57440 system_pods.go:89] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.138841   57440 system_pods.go:89] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.138845   57440 system_pods.go:89] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.138849   57440 system_pods.go:89] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.138853   57440 system_pods.go:89] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.138856   57440 system_pods.go:89] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.138863   57440 system_pods.go:89] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.138868   57440 system_pods.go:89] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.138874   57440 system_pods.go:126] duration metric: took 5.576801ms to wait for k8s-apps to be running ...
	I0816 13:48:54.138879   57440 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:48:54.138922   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:54.154406   57440 system_svc.go:56] duration metric: took 15.507123ms WaitForService to wait for kubelet
	I0816 13:48:54.154438   57440 kubeadm.go:582] duration metric: took 4m24.107091364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:48:54.154463   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:48:54.156991   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:48:54.157012   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:48:54.157027   57440 node_conditions.go:105] duration metric: took 2.558338ms to run NodePressure ...
	I0816 13:48:54.157041   57440 start.go:241] waiting for startup goroutines ...
	I0816 13:48:54.157052   57440 start.go:246] waiting for cluster config update ...
	I0816 13:48:54.157070   57440 start.go:255] writing updated cluster config ...
	I0816 13:48:54.157381   57440 ssh_runner.go:195] Run: rm -f paused
	I0816 13:48:54.205583   57440 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:48:54.207845   57440 out.go:177] * Done! kubectl is now configured to use "no-preload-311070" cluster and "default" namespace by default
	I0816 13:48:51.999301   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.498057   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:52.107465   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.606735   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.498967   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.997311   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.606925   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.606970   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.607943   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.997760   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:02.998653   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:03.107555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.606363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.497723   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.498572   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.997905   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.607916   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.606579   58430 pod_ready.go:82] duration metric: took 4m0.00617652s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:09.606602   58430 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:49:09.606612   58430 pod_ready.go:39] duration metric: took 4m3.606005486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:09.606627   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:49:09.606652   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:09.606698   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:09.660442   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:09.660461   58430 cri.go:89] found id: ""
	I0816 13:49:09.660469   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:09.660519   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.664752   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:09.664813   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:09.701589   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:09.701615   58430 cri.go:89] found id: ""
	I0816 13:49:09.701625   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:09.701681   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.706048   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:09.706114   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:09.743810   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:09.743832   58430 cri.go:89] found id: ""
	I0816 13:49:09.743841   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:09.743898   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.748197   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:09.748271   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:09.783730   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:09.783752   58430 cri.go:89] found id: ""
	I0816 13:49:09.783765   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:09.783828   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.787845   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:09.787909   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:09.828449   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:09.828472   58430 cri.go:89] found id: ""
	I0816 13:49:09.828481   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:09.828546   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.832890   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:09.832963   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:09.880136   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:09.880164   58430 cri.go:89] found id: ""
	I0816 13:49:09.880175   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:09.880232   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.884533   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:09.884599   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:09.924776   58430 cri.go:89] found id: ""
	I0816 13:49:09.924805   58430 logs.go:276] 0 containers: []
	W0816 13:49:09.924816   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:09.924828   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:09.924889   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:09.971663   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:09.971689   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:09.971695   58430 cri.go:89] found id: ""
	I0816 13:49:09.971705   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:09.971770   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.976297   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.980815   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:09.980844   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:10.020287   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:10.020317   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:10.060266   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:10.060291   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:10.113574   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:10.113608   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:10.153457   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:10.153482   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:10.191530   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:10.191559   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:10.206267   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:10.206296   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:10.326723   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:10.326753   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:10.377541   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:10.377574   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:10.895387   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:10.895445   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:10.947447   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:10.947475   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:11.997943   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:13.998932   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:11.020745   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:11.020786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:11.081224   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:11.081257   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.632726   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:49:13.651185   58430 api_server.go:72] duration metric: took 4m14.880109274s to wait for apiserver process to appear ...
	I0816 13:49:13.651214   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:49:13.651254   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:13.651308   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:13.691473   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:13.691495   58430 cri.go:89] found id: ""
	I0816 13:49:13.691503   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:13.691582   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.695945   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:13.695998   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:13.730798   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:13.730830   58430 cri.go:89] found id: ""
	I0816 13:49:13.730840   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:13.730913   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.735156   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:13.735222   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:13.769612   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:13.769639   58430 cri.go:89] found id: ""
	I0816 13:49:13.769650   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:13.769710   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.773690   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:13.773745   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:13.815417   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:13.815444   58430 cri.go:89] found id: ""
	I0816 13:49:13.815454   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:13.815515   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.819596   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:13.819666   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:13.852562   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.852587   58430 cri.go:89] found id: ""
	I0816 13:49:13.852597   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:13.852657   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.856697   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:13.856757   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:13.902327   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:13.902346   58430 cri.go:89] found id: ""
	I0816 13:49:13.902353   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:13.902416   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.906789   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:13.906840   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:13.943401   58430 cri.go:89] found id: ""
	I0816 13:49:13.943430   58430 logs.go:276] 0 containers: []
	W0816 13:49:13.943438   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:13.943443   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:13.943490   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:13.979154   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:13.979178   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:13.979182   58430 cri.go:89] found id: ""
	I0816 13:49:13.979189   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:13.979235   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.983301   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.988522   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:13.988545   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:14.005891   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:14.005916   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:14.055686   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:14.055713   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:14.104975   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:14.105010   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:14.145761   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:14.145786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:14.198935   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:14.198966   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:14.662287   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:14.662323   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:14.717227   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:14.717256   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:14.789824   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:14.789868   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:14.902892   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:14.902922   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:14.946711   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:14.946736   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:14.986143   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:14.986175   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:15.022107   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:15.022138   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:16.497493   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:18.497979   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:17.556820   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:49:17.562249   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:49:17.563264   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:49:17.563280   58430 api_server.go:131] duration metric: took 3.912060569s to wait for apiserver health ...
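Editor's note (not part of the captured log): the health probe recorded above is a plain GET against the apiserver's /healthz endpoint. Assuming anonymous access to /healthz is enabled (the kubeadm default via the system:public-info-viewer binding), roughly the same check can be run by hand:

	curl -k https://192.168.50.186:8444/healthz
	# expected body on success: ok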
	I0816 13:49:17.563288   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:49:17.563312   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:17.563377   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:17.604072   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:17.604099   58430 cri.go:89] found id: ""
	I0816 13:49:17.604109   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:17.604163   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.608623   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:17.608678   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:17.650241   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.650267   58430 cri.go:89] found id: ""
	I0816 13:49:17.650275   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:17.650328   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.654928   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:17.655000   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:17.690057   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:17.690085   58430 cri.go:89] found id: ""
	I0816 13:49:17.690095   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:17.690164   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.694636   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:17.694692   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:17.730134   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:17.730167   58430 cri.go:89] found id: ""
	I0816 13:49:17.730177   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:17.730238   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.734364   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:17.734420   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:17.769579   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:17.769595   58430 cri.go:89] found id: ""
	I0816 13:49:17.769603   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:17.769643   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.773543   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:17.773601   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:17.814287   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:17.814310   58430 cri.go:89] found id: ""
	I0816 13:49:17.814319   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:17.814393   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.818904   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:17.818977   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:17.858587   58430 cri.go:89] found id: ""
	I0816 13:49:17.858614   58430 logs.go:276] 0 containers: []
	W0816 13:49:17.858622   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:17.858627   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:17.858674   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:17.901759   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:17.901784   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:17.901788   58430 cri.go:89] found id: ""
	I0816 13:49:17.901796   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:17.901853   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.906139   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.910273   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:17.910293   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:17.924565   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:17.924590   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.971895   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:17.971927   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:18.011332   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:18.011364   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:18.049264   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:18.049292   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:18.084004   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:18.084030   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:18.136961   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:18.137000   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:18.210452   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:18.210483   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:18.327398   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:18.327429   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:18.378777   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:18.378809   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:18.430052   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:18.430088   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:18.496775   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:18.496806   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:18.540493   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:18.540523   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:21.451644   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:49:21.451673   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.451679   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.451682   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.451687   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.451691   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.451694   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.451701   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.451705   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.451713   58430 system_pods.go:74] duration metric: took 3.888418707s to wait for pod list to return data ...
	I0816 13:49:21.451719   58430 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:49:21.454558   58430 default_sa.go:45] found service account: "default"
	I0816 13:49:21.454578   58430 default_sa.go:55] duration metric: took 2.853068ms for default service account to be created ...
	I0816 13:49:21.454585   58430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:49:21.458906   58430 system_pods.go:86] 8 kube-system pods found
	I0816 13:49:21.458930   58430 system_pods.go:89] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.458935   58430 system_pods.go:89] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.458941   58430 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.458944   58430 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.458948   58430 system_pods.go:89] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.458951   58430 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.458958   58430 system_pods.go:89] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.458961   58430 system_pods.go:89] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.458968   58430 system_pods.go:126] duration metric: took 4.378971ms to wait for k8s-apps to be running ...
	I0816 13:49:21.458975   58430 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:49:21.459016   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:21.476060   58430 system_svc.go:56] duration metric: took 17.075817ms WaitForService to wait for kubelet
	I0816 13:49:21.476086   58430 kubeadm.go:582] duration metric: took 4m22.705015833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:49:21.476109   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:49:21.479557   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:49:21.479585   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:49:21.479600   58430 node_conditions.go:105] duration metric: took 3.483638ms to run NodePressure ...
	I0816 13:49:21.479613   58430 start.go:241] waiting for startup goroutines ...
	I0816 13:49:21.479622   58430 start.go:246] waiting for cluster config update ...
	I0816 13:49:21.479637   58430 start.go:255] writing updated cluster config ...
	I0816 13:49:21.479949   58430 ssh_runner.go:195] Run: rm -f paused
	I0816 13:49:21.530237   58430 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:49:21.532328   58430 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893736" cluster and "default" namespace by default
	I0816 13:49:20.998486   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:23.498358   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:25.498502   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:27.998622   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:30.491886   57240 pod_ready.go:82] duration metric: took 4m0.000539211s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:30.491929   57240 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 13:49:30.491945   57240 pod_ready.go:39] duration metric: took 4m12.492024576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:30.491972   57240 kubeadm.go:597] duration metric: took 4m19.795438093s to restartPrimaryControlPlane
	W0816 13:49:30.492032   57240 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:49:30.492059   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:49:56.783263   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.29118348s)
	I0816 13:49:56.783321   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:56.798550   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:49:56.810542   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:49:56.820837   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:49:56.820873   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:49:56.820947   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:49:56.831998   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:49:56.832057   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:49:56.842351   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:49:56.852062   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:49:56.852119   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:49:56.862337   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.872000   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:49:56.872050   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.881764   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:49:56.891211   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:49:56.891276   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:49:56.900969   57240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:49:56.942823   57240 kubeadm.go:310] W0816 13:49:56.895203    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:56.943751   57240 kubeadm.go:310] W0816 13:49:56.896255    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:57.049491   57240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:05.244505   57240 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 13:50:05.244561   57240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:05.244657   57240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:05.244775   57240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:05.244901   57240 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:05.244989   57240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:05.246568   57240 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:05.246667   57240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:05.246779   57240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:05.246885   57240 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:05.246968   57240 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:05.247065   57240 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:05.247125   57240 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:05.247195   57240 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:05.247260   57240 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:05.247372   57240 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:05.247480   57240 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:05.247521   57240 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:05.247590   57240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:05.247670   57240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:05.247751   57240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 13:50:05.247830   57240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:05.247886   57240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:05.247965   57240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:05.248046   57240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:05.248100   57240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:05.249601   57240 out.go:235]   - Booting up control plane ...
	I0816 13:50:05.249698   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:05.249779   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:05.249835   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:05.249930   57240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:05.250007   57240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:05.250046   57240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:05.250184   57240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 13:50:05.250289   57240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 13:50:05.250343   57240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296228s
	I0816 13:50:05.250403   57240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 13:50:05.250456   57240 kubeadm.go:310] [api-check] The API server is healthy after 5.002119618s
	I0816 13:50:05.250546   57240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 13:50:05.250651   57240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 13:50:05.250700   57240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 13:50:05.250876   57240 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-302520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 13:50:05.250930   57240 kubeadm.go:310] [bootstrap-token] Using token: dta4cr.diyk2wto3tx3ixlb
	I0816 13:50:05.252120   57240 out.go:235]   - Configuring RBAC rules ...
	I0816 13:50:05.252207   57240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 13:50:05.252287   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 13:50:05.252418   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 13:50:05.252542   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 13:50:05.252648   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 13:50:05.252724   57240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 13:50:05.252819   57240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 13:50:05.252856   57240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 13:50:05.252895   57240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 13:50:05.252901   57240 kubeadm.go:310] 
	I0816 13:50:05.253004   57240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 13:50:05.253022   57240 kubeadm.go:310] 
	I0816 13:50:05.253116   57240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 13:50:05.253126   57240 kubeadm.go:310] 
	I0816 13:50:05.253155   57240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 13:50:05.253240   57240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 13:50:05.253283   57240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 13:50:05.253289   57240 kubeadm.go:310] 
	I0816 13:50:05.253340   57240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 13:50:05.253347   57240 kubeadm.go:310] 
	I0816 13:50:05.253405   57240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 13:50:05.253423   57240 kubeadm.go:310] 
	I0816 13:50:05.253484   57240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 13:50:05.253556   57240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 13:50:05.253621   57240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 13:50:05.253629   57240 kubeadm.go:310] 
	I0816 13:50:05.253710   57240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 13:50:05.253840   57240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 13:50:05.253855   57240 kubeadm.go:310] 
	I0816 13:50:05.253962   57240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254087   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 13:50:05.254122   57240 kubeadm.go:310] 	--control-plane 
	I0816 13:50:05.254126   57240 kubeadm.go:310] 
	I0816 13:50:05.254202   57240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 13:50:05.254209   57240 kubeadm.go:310] 
	I0816 13:50:05.254280   57240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254394   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 13:50:05.254407   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:50:05.254416   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:50:05.255889   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:50:05.257086   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:50:05.268668   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
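Editor's note (not part of the captured log): the 496-byte payload copied above is minikube's generated bridge CNI config for this profile. To inspect what was actually written on the node, something like the following should work:

	minikube -p embed-certs-302520 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist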
	I0816 13:50:05.288676   57240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:50:05.288735   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.288755   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-302520 minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=embed-certs-302520 minikube.k8s.io/primary=true
	I0816 13:50:05.494987   57240 ops.go:34] apiserver oom_adj: -16
	I0816 13:50:05.495066   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.995792   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.495937   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.995513   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.495437   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.995600   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.495194   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.995101   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.495533   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.659383   57240 kubeadm.go:1113] duration metric: took 4.370714211s to wait for elevateKubeSystemPrivileges
	I0816 13:50:09.659425   57240 kubeadm.go:394] duration metric: took 4m59.010243945s to StartCluster
	I0816 13:50:09.659448   57240 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.659529   57240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:50:09.661178   57240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.661475   57240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:50:09.661579   57240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:50:09.661662   57240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-302520"
	I0816 13:50:09.661678   57240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-302520"
	I0816 13:50:09.661693   57240 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-302520"
	W0816 13:50:09.661701   57240 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:50:09.661683   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:50:09.661707   57240 addons.go:69] Setting metrics-server=true in profile "embed-certs-302520"
	I0816 13:50:09.661730   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.661732   57240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-302520"
	I0816 13:50:09.661744   57240 addons.go:234] Setting addon metrics-server=true in "embed-certs-302520"
	W0816 13:50:09.661758   57240 addons.go:243] addon metrics-server should already be in state true
	I0816 13:50:09.661789   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.662063   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662070   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662092   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662093   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662125   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662177   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.663568   57240 out.go:177] * Verifying Kubernetes components...
	I0816 13:50:09.665144   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:50:09.679643   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 13:50:09.679976   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0816 13:50:09.680138   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680460   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680652   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.680677   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681040   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.681060   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681084   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681449   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681659   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.681706   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.681737   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.682300   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0816 13:50:09.682644   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.683099   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.683121   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.683464   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.683993   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.684020   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.684695   57240 addons.go:234] Setting addon default-storageclass=true in "embed-certs-302520"
	W0816 13:50:09.684713   57240 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:50:09.684733   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.685016   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.685044   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.699612   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0816 13:50:09.700235   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.700244   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0816 13:50:09.700776   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.700795   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.700827   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.701285   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.701369   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0816 13:50:09.701457   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.701467   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.701939   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.701980   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.702188   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.702209   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.702494   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.702618   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.702635   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.703042   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.703250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.704568   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.705308   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.707074   57240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:50:09.707074   57240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:50:09.708773   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:50:09.708792   57240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:50:09.708813   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.708894   57240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:09.708924   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:50:09.708941   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.714305   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714338   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714812   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714874   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714928   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.715181   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715215   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715363   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715399   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715512   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715556   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715634   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.715876   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.724172   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0816 13:50:09.724636   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.725184   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.725213   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.725596   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.725799   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.727188   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.727410   57240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:09.727426   57240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:50:09.727447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.729840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730228   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.730255   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730534   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.730723   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.730867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.731014   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.899195   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:50:09.939173   57240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958087   57240 node_ready.go:49] node "embed-certs-302520" has status "Ready":"True"
	I0816 13:50:09.958119   57240 node_ready.go:38] duration metric: took 18.911367ms for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958130   57240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:09.963326   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:10.083721   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:10.184794   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:10.203192   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:50:10.203214   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:50:10.285922   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:50:10.285950   57240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:50:10.370797   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:10.370825   57240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:50:10.420892   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.420942   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421261   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421280   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.421282   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421293   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.421303   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421556   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421620   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421625   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.427229   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.427250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.427591   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.427638   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.427655   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.454486   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:11.225905   57240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041077031s)
	I0816 13:50:11.225958   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.225969   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226248   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226268   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.226273   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226295   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.226310   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226561   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226608   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226627   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447454   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447484   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.447823   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.447890   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.447908   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447924   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447936   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.448179   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.448195   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.448241   57240 addons.go:475] Verifying addon metrics-server=true in "embed-certs-302520"
	I0816 13:50:11.450274   57240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 13:50:11.451676   57240 addons.go:510] duration metric: took 1.790101568s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 13:50:11.971087   57240 pod_ready.go:103] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:12.470167   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.470193   57240 pod_ready.go:82] duration metric: took 2.506842546s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.470203   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474959   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.474980   57240 pod_ready.go:82] duration metric: took 4.769458ms for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474988   57240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479388   57240 pod_ready.go:93] pod "etcd-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.479410   57240 pod_ready.go:82] duration metric: took 4.41564ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479421   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483567   57240 pod_ready.go:93] pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.483589   57240 pod_ready.go:82] duration metric: took 4.159906ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483600   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:14.490212   57240 pod_ready.go:103] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:15.990204   57240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.990226   57240 pod_ready.go:82] duration metric: took 3.506618768s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.990235   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994580   57240 pod_ready.go:93] pod "kube-proxy-spgtw" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.994597   57240 pod_ready.go:82] duration metric: took 4.356588ms for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994605   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068472   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:16.068495   57240 pod_ready.go:82] duration metric: took 73.884906ms for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068503   57240 pod_ready.go:39] duration metric: took 6.110362477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:16.068519   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:50:16.068579   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:50:16.086318   57240 api_server.go:72] duration metric: took 6.424804798s to wait for apiserver process to appear ...
	I0816 13:50:16.086345   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:50:16.086361   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:50:16.091170   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:50:16.092122   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:50:16.092138   57240 api_server.go:131] duration metric: took 5.787898ms to wait for apiserver health ...
	I0816 13:50:16.092146   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:50:16.271303   57240 system_pods.go:59] 9 kube-system pods found
	I0816 13:50:16.271338   57240 system_pods.go:61] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.271344   57240 system_pods.go:61] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.271348   57240 system_pods.go:61] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.271353   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.271359   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.271364   57240 system_pods.go:61] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.271370   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.271379   57240 system_pods.go:61] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.271389   57240 system_pods.go:61] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.271398   57240 system_pods.go:74] duration metric: took 179.244421ms to wait for pod list to return data ...
	I0816 13:50:16.271410   57240 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:50:16.468167   57240 default_sa.go:45] found service account: "default"
	I0816 13:50:16.468196   57240 default_sa.go:55] duration metric: took 196.779435ms for default service account to be created ...
	I0816 13:50:16.468207   57240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:50:16.670917   57240 system_pods.go:86] 9 kube-system pods found
	I0816 13:50:16.670943   57240 system_pods.go:89] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.670949   57240 system_pods.go:89] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.670953   57240 system_pods.go:89] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.670957   57240 system_pods.go:89] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.670960   57240 system_pods.go:89] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.670963   57240 system_pods.go:89] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.670967   57240 system_pods.go:89] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.670972   57240 system_pods.go:89] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.670976   57240 system_pods.go:89] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.670984   57240 system_pods.go:126] duration metric: took 202.771216ms to wait for k8s-apps to be running ...
	I0816 13:50:16.670990   57240 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:50:16.671040   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:16.686873   57240 system_svc.go:56] duration metric: took 15.876641ms WaitForService to wait for kubelet
	I0816 13:50:16.686906   57240 kubeadm.go:582] duration metric: took 7.025397638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:50:16.686925   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:50:16.869367   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:50:16.869393   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:50:16.869405   57240 node_conditions.go:105] duration metric: took 182.475776ms to run NodePressure ...
	I0816 13:50:16.869420   57240 start.go:241] waiting for startup goroutines ...
	I0816 13:50:16.869427   57240 start.go:246] waiting for cluster config update ...
	I0816 13:50:16.869436   57240 start.go:255] writing updated cluster config ...
	I0816 13:50:16.869686   57240 ssh_runner.go:195] Run: rm -f paused
	I0816 13:50:16.919168   57240 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:50:16.921207   57240 out.go:177] * Done! kubectl is now configured to use "embed-certs-302520" cluster and "default" namespace by default
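The metrics-server pod above is still Pending when this run finishes, because the addon was pointed at the fake.domain echoserver image earlier in the same run; a quick way to inspect it from the host, assuming the metrics-server deployment carries the usual k8s-app=metrics-server label (not shown in this log), would be:

    kubectl --context embed-certs-302520 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context embed-certs-302520 -n kube-system describe pod metrics-server-6867b74b74-q58h2

Both commands only read cluster state; the expected outcome here is an image pull failure rather than a Ready pod.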
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
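When kubeadm fails in the wait-control-plane phase like this, the hints embedded in its output can be run against the minikube guest directly; a minimal sketch, with <profile> standing in for the profile name used by this run (not shown in this excerpt):

    minikube -p <profile> ssh -- sudo systemctl status kubelet --no-pager
    minikube -p <profile> ssh -- sudo journalctl -xeu kubelet -n 200 --no-pager
    minikube -p <profile> ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

These are the same kubelet and crictl checks the kubeadm message suggests, just wrapped in minikube ssh so they run inside the KVM guest rather than on the Jenkins host.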
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.207290580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816352207272280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e85c20f8-0ed6-48b8-9da1-1c8a2a688394 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.207933316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcd48073-3e3b-4eab-9318-e397b10bb12d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.208003242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcd48073-3e3b-4eab-9318-e397b10bb12d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.208051349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fcd48073-3e3b-4eab-9318-e397b10bb12d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.246380464Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5f122fb-6b0b-493a-960f-d87ab4e2e8c3 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.246497386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5f122fb-6b0b-493a-960f-d87ab4e2e8c3 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.247486495Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b25f74c4-5631-413d-aaab-921b27b8368d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.247919182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816352247898668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b25f74c4-5631-413d-aaab-921b27b8368d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.248516433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e7f9513-2650-4e5b-b6f8-bb62c2af7f5b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.248640242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e7f9513-2650-4e5b-b6f8-bb62c2af7f5b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.248676890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3e7f9513-2650-4e5b-b6f8-bb62c2af7f5b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.282670255Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1b8ca05-cf41-4e4c-a057-2d524d607262 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.282765158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1b8ca05-cf41-4e4c-a057-2d524d607262 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.284120069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c881ba79-b045-4616-b483-2d7b5b669b65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.284516307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816352284490734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c881ba79-b045-4616-b483-2d7b5b669b65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.285209790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=561cd9dc-bcd1-45d8-8c3e-0ff033fea9b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.285288878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=561cd9dc-bcd1-45d8-8c3e-0ff033fea9b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.285325925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=561cd9dc-bcd1-45d8-8c3e-0ff033fea9b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.320132347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce14e546-3830-4813-a5d9-397fd078e597 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.320234322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce14e546-3830-4813-a5d9-397fd078e597 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.321387590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b83a82a-0c4f-4dc3-bb63-9dc8c84bf137 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.321826315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816352321798835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b83a82a-0c4f-4dc3-bb63-9dc8c84bf137 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.323920355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57a0f3b7-1dab-4afa-81b3-934467e93cac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.323994432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57a0f3b7-1dab-4afa-81b3-934467e93cac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:52:32 old-k8s-version-882237 crio[655]: time="2024-08-16 13:52:32.324040475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=57a0f3b7-1dab-4afa-81b3-934467e93cac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 13:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050110] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.904148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.568641] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.219540] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.067905] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075212] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.209113] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.188995] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.278563] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +6.705927] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.067606] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.266713] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[ +11.277225] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 13:48] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[Aug16 13:50] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +0.065917] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:52:32 up 8 min,  0 users,  load average: 0.07, 0.18, 0.13
	Linux old-k8s-version-882237 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: net.(*sysDialer).dialSerial(0xc000c3ba00, 0x4f7fe40, 0xc0001e37a0, 0xc0004072f0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: net.(*Dialer).DialContext(0xc000198c00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0003b93e0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000af0a60, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0003b93e0, 0x24, 0x60, 0x7f527023d3e8, 0x118, ...)
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: net/http.(*Transport).dial(0xc000ac0000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0003b93e0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: net/http.(*Transport).dialConn(0xc000ac0000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0002aa3c0, 0x5, 0xc0003b93e0, 0x24, 0x0, 0xc0004d05a0, ...)
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: net/http.(*Transport).dialConnFor(0xc000ac0000, 0xc000027810)
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]: created by net/http.(*Transport).queueForDial
	Aug 16 13:52:29 old-k8s-version-882237 kubelet[5536]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 16 13:52:29 old-k8s-version-882237 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 13:52:29 old-k8s-version-882237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 13:52:29 old-k8s-version-882237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 16 13:52:29 old-k8s-version-882237 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 13:52:29 old-k8s-version-882237 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 13:52:30 old-k8s-version-882237 kubelet[5581]: I0816 13:52:30.086440    5581 server.go:416] Version: v1.20.0
	Aug 16 13:52:30 old-k8s-version-882237 kubelet[5581]: I0816 13:52:30.086731    5581 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 13:52:30 old-k8s-version-882237 kubelet[5581]: I0816 13:52:30.088513    5581 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 13:52:30 old-k8s-version-882237 kubelet[5581]: W0816 13:52:30.089419    5581 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 13:52:30 old-k8s-version-882237 kubelet[5581]: I0816 13:52:30.089766    5581 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (219.091318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-882237" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (723.78s)
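The failure above is kubeadm timing out in the wait-control-plane phase because the kubelet never answers on localhost:10248. The commands below are a minimal sketch of re-running the checks that kubeadm and minikube themselves suggest in that output (systemctl/journalctl/crictl, then a retry with the proposed cgroup-driver override); the profile name is taken from this log, and any retry flags beyond --extra-config are assumptions based on the original start arguments, not part of the recorded run.

	# Inspect the kubelet inside the node (the commands are the ones suggested in the output above)
	out/minikube-linux-amd64 -p old-k8s-version-882237 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-882237 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-882237 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the suggested kubelet cgroup-driver override (other flags assumed unchanged)
	out/minikube-linux-amd64 start -p old-k8s-version-882237 \
		--extra-config=kubelet.cgroup-driver=systemd \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0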

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736: exit status 3 (3.167822425s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:42:06.741322   58321 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host
	E0816 13:42:06.741343   58321 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-893736 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-893736 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153786294s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-893736 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736: exit status 3 (3.06200776s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 13:42:15.957288   58385 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host
	E0816 13:42:15.957311   58385 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-893736" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
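Here the stop left the host in state "Error" rather than "Stopped" because the status probe could not reach the VM over SSH at all (no route to host to 192.168.50.186:22). A minimal sketch of narrowing that down on the libvirt host, assuming virsh is available and the kvm2 domain carries the profile name (an assumption; confirm with 'virsh list --all' first):

	# Check what libvirt thinks the VM is doing
	virsh list --all | grep default-k8s-diff-port-893736
	virsh domstate default-k8s-diff-port-893736        # a clean stop should report "shut off"

	# Re-run the same probe the test uses; exit status 3 plus "Error" reproduces the failure above
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
	echo $?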

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0816 13:48:56.823337   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-311070 -n no-preload-311070
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 13:57:54.729562341 +0000 UTC m=+5817.638252957
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
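The test waits up to 9m0s for a pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and never sees one. A rough manual equivalent of that wait, assuming the kubeconfig context is named after the profile (minikube's default behaviour):

	kubectl --context no-preload-311070 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-311070 wait --for=condition=ready pod \
		-l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=540s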
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-311070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-311070 logs -n 25: (2.031545385s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-779306 -- sudo                         | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-779306                                 | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759623                           | kubernetes-upgrade-759623    | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:35 UTC |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-302520            | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:42:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:42:15.998819   58430 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:42:15.998960   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.998970   58430 out.go:358] Setting ErrFile to fd 2...
	I0816 13:42:15.998976   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.999197   58430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:42:15.999747   58430 out.go:352] Setting JSON to false
	I0816 13:42:16.000715   58430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5081,"bootTime":1723810655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:42:16.000770   58430 start.go:139] virtualization: kvm guest
	I0816 13:42:16.003216   58430 out.go:177] * [default-k8s-diff-port-893736] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:42:16.004663   58430 notify.go:220] Checking for updates...
	I0816 13:42:16.004698   58430 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:42:16.006298   58430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:42:16.007719   58430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:42:16.009073   58430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:42:16.010602   58430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:42:16.012058   58430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:42:16.013799   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:42:16.014204   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.014278   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.029427   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0816 13:42:16.029977   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.030548   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.030573   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.030903   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.031164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.031412   58430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:42:16.031691   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.031731   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.046245   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0816 13:42:16.046668   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.047205   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.047244   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.047537   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.047730   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.080470   58430 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:42:16.081700   58430 start.go:297] selected driver: kvm2
	I0816 13:42:16.081721   58430 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.081825   58430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:42:16.082512   58430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.082593   58430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:42:16.097784   58430 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:42:16.098155   58430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:42:16.098223   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:42:16.098233   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:42:16.098274   58430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.098365   58430 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.100341   58430 out.go:177] * Starting "default-k8s-diff-port-893736" primary control-plane node in "default-k8s-diff-port-893736" cluster
	I0816 13:42:17.205125   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:16.101925   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:42:16.101966   58430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:42:16.101973   58430 cache.go:56] Caching tarball of preloaded images
	I0816 13:42:16.102052   58430 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:42:16.102063   58430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:42:16.102162   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:42:16.102344   58430 start.go:360] acquireMachinesLock for default-k8s-diff-port-893736: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:42:23.285172   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:26.357214   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:32.437218   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:35.509221   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:41.589174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:44.661162   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:50.741223   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:53.813193   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:59.893180   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:02.965205   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:09.045252   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:12.117232   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:18.197189   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:21.269234   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:27.349182   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:30.421174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:36.501197   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:39.573246   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:42.577406   57440 start.go:364] duration metric: took 4m10.318515071s to acquireMachinesLock for "no-preload-311070"
	I0816 13:43:42.577513   57440 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:43:42.577529   57440 fix.go:54] fixHost starting: 
	I0816 13:43:42.577955   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:43:42.577989   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:43:42.593032   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0816 13:43:42.593416   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:43:42.593860   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:43:42.593882   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:43:42.594256   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:43:42.594434   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:43:42.594586   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:43:42.596234   57440 fix.go:112] recreateIfNeeded on no-preload-311070: state=Stopped err=<nil>
	I0816 13:43:42.596261   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	W0816 13:43:42.596431   57440 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:43:42.598334   57440 out.go:177] * Restarting existing kvm2 VM for "no-preload-311070" ...
	I0816 13:43:42.574954   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:43:42.574990   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575324   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:43:42.575349   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575554   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:43:42.577250   57240 machine.go:96] duration metric: took 4m37.4289608s to provisionDockerMachine
	I0816 13:43:42.577309   57240 fix.go:56] duration metric: took 4m37.450613575s for fixHost
	I0816 13:43:42.577314   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 4m37.450631849s
	W0816 13:43:42.577330   57240 start.go:714] error starting host: provision: host is not running
	W0816 13:43:42.577401   57240 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 13:43:42.577410   57240 start.go:729] Will try again in 5 seconds ...
	I0816 13:43:42.599558   57440 main.go:141] libmachine: (no-preload-311070) Calling .Start
	I0816 13:43:42.599720   57440 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:43:42.600383   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:43:42.600682   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:43:42.601157   57440 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:43:42.601868   57440 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:43:43.816308   57440 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:43:43.817179   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:43.817566   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:43.817586   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:43.817516   58770 retry.go:31] will retry after 295.385031ms: waiting for machine to come up
	I0816 13:43:44.115046   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.115850   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.115875   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.115787   58770 retry.go:31] will retry after 340.249659ms: waiting for machine to come up
	I0816 13:43:44.457278   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.457722   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.457752   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.457657   58770 retry.go:31] will retry after 476.905089ms: waiting for machine to come up
	I0816 13:43:44.936230   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.936674   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.936714   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.936640   58770 retry.go:31] will retry after 555.288542ms: waiting for machine to come up
	I0816 13:43:45.493301   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.493698   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.493724   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.493657   58770 retry.go:31] will retry after 462.336365ms: waiting for machine to come up
	I0816 13:43:45.957163   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.957553   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.957580   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.957509   58770 retry.go:31] will retry after 886.665194ms: waiting for machine to come up
	I0816 13:43:46.845380   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:46.845743   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:46.845763   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:46.845723   58770 retry.go:31] will retry after 909.05227ms: waiting for machine to come up
	I0816 13:43:47.579134   57240 start.go:360] acquireMachinesLock for embed-certs-302520: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:43:47.755998   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:47.756439   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:47.756460   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:47.756407   58770 retry.go:31] will retry after 1.380778497s: waiting for machine to come up
	I0816 13:43:49.138398   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:49.138861   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:49.138884   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:49.138811   58770 retry.go:31] will retry after 1.788185586s: waiting for machine to come up
	I0816 13:43:50.929915   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:50.930326   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:50.930356   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:50.930276   58770 retry.go:31] will retry after 1.603049262s: waiting for machine to come up
	I0816 13:43:52.536034   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:52.536492   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:52.536518   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:52.536438   58770 retry.go:31] will retry after 1.964966349s: waiting for machine to come up
	I0816 13:43:54.504003   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:54.504408   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:54.504440   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:54.504363   58770 retry.go:31] will retry after 3.616796835s: waiting for machine to come up
	I0816 13:43:58.122295   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:58.122714   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:58.122747   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:58.122673   58770 retry.go:31] will retry after 3.893804146s: waiting for machine to come up
	I0816 13:44:02.020870   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021351   57440 main.go:141] libmachine: (no-preload-311070) Found IP for machine: 192.168.61.116
	I0816 13:44:02.021372   57440 main.go:141] libmachine: (no-preload-311070) Reserving static IP address...
	I0816 13:44:02.021385   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has current primary IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021917   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.021948   57440 main.go:141] libmachine: (no-preload-311070) Reserved static IP address: 192.168.61.116
	I0816 13:44:02.021966   57440 main.go:141] libmachine: (no-preload-311070) DBG | skip adding static IP to network mk-no-preload-311070 - found existing host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"}
	I0816 13:44:02.021977   57440 main.go:141] libmachine: (no-preload-311070) DBG | Getting to WaitForSSH function...
	I0816 13:44:02.021989   57440 main.go:141] libmachine: (no-preload-311070) Waiting for SSH to be available...
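The run of "retry.go:31] will retry after …: waiting for machine to come up" lines above, ending once "Found IP for machine" is reported, is minikube polling libvirt until the restarted domain picks up a DHCP lease. A minimal Go sketch of that poll-with-growing-backoff pattern is below; the function name, signature, and backoff cap are illustrative assumptions, not minikube's actual retry.go API.

    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    // waitForIP polls getIP until the VM reports an address or the deadline
    // passes, lengthening the delay between attempts much like the intervals
    // growing from ~300ms to several seconds in the log above.
    func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil && ip != "" {
                return ip, nil
            }
            log.Printf("will retry after %v: waiting for machine to come up", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay *= 2 // back off before the next lookup
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        // Stand-in for the libvirt DHCP-lease lookup: fails a few times, then succeeds.
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.61.116", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }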
	I0816 13:44:02.024661   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025071   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.025094   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025327   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH client type: external
	I0816 13:44:02.025349   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa (-rw-------)
	I0816 13:44:02.025376   57440 main.go:141] libmachine: (no-preload-311070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:02.025387   57440 main.go:141] libmachine: (no-preload-311070) DBG | About to run SSH command:
	I0816 13:44:02.025406   57440 main.go:141] libmachine: (no-preload-311070) DBG | exit 0
	I0816 13:44:02.148864   57440 main.go:141] libmachine: (no-preload-311070) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:02.149279   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:44:02.149868   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.152149   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152460   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.152481   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152681   57440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:44:02.152853   57440 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:02.152869   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:02.153131   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.155341   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155703   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.155743   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155845   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:02.156024   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156199   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156350   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.156557   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.156747   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.156758   57440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:02.261263   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:02.261290   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261514   57440 buildroot.go:166] provisioning hostname "no-preload-311070"
	I0816 13:44:02.261528   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261696   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.264473   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.264892   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.264936   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.265030   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.265198   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265365   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265485   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.265624   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.265796   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.265813   57440 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname
	I0816 13:44:02.384079   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-311070
	
	I0816 13:44:02.384112   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.386756   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387065   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.387104   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387285   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.387501   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387699   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387843   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.387997   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.388193   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.388218   57440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-311070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-311070/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-311070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:02.502089   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:02.502122   57440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:02.502159   57440 buildroot.go:174] setting up certificates
	I0816 13:44:02.502173   57440 provision.go:84] configureAuth start
	I0816 13:44:02.502191   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.502484   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.505215   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505523   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.505560   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505726   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.507770   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508033   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.508062   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508193   57440 provision.go:143] copyHostCerts
	I0816 13:44:02.508249   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:02.508267   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:02.508336   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:02.508426   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:02.508435   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:02.508460   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:02.508520   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:02.508527   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:02.508548   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:02.508597   57440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.no-preload-311070 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-311070]
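The provision.go line above generates the machine's server certificate: signed by the local minikube CA (ca.pem / ca-key.pem), with organization jenkins.no-preload-311070 and the SANs 127.0.0.1, 192.168.61.116, localhost, minikube, and no-preload-311070. A rough Go sketch of that kind of issuance with the standard crypto/x509 package follows; the key size, serial numbers, the 26280h validity (borrowed from CertExpiration in the profile above), and printing to stdout instead of writing server.pem are assumptions of the sketch, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in for the existing minikube CA (ca.pem / ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-311070"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-311070"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }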
	I0816 13:44:02.732379   57440 provision.go:177] copyRemoteCerts
	I0816 13:44:02.732434   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:02.732458   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.735444   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.735803   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.735837   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.736040   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.736274   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.736428   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.736587   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:02.819602   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:02.843489   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:44:02.866482   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:02.889908   57440 provision.go:87] duration metric: took 387.723287ms to configureAuth
	I0816 13:44:02.889936   57440 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:02.890151   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:02.890250   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.892851   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893158   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.893184   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893381   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.893607   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893777   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893925   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.894076   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.894267   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.894286   57440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:03.153730   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:03.153755   57440 machine.go:96] duration metric: took 1.000891309s to provisionDockerMachine
	I0816 13:44:03.153766   57440 start.go:293] postStartSetup for "no-preload-311070" (driver="kvm2")
	I0816 13:44:03.153776   57440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:03.153790   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.154088   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:03.154122   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.156612   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.156931   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.156969   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.157113   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.157299   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.157438   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.157595   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.241700   57440 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:03.246133   57440 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:03.246155   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:03.246221   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:03.246292   57440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:03.246379   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:03.257778   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:03.283511   57440 start.go:296] duration metric: took 129.718161ms for postStartSetup
	I0816 13:44:03.283552   57440 fix.go:56] duration metric: took 20.706029776s for fixHost
	I0816 13:44:03.283603   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.286296   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286608   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.286651   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286803   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.287016   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287158   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287298   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.287477   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:03.287639   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:03.287649   57440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:03.389691   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815843.358144829
	
	I0816 13:44:03.389710   57440 fix.go:216] guest clock: 1723815843.358144829
	I0816 13:44:03.389717   57440 fix.go:229] Guest: 2024-08-16 13:44:03.358144829 +0000 UTC Remote: 2024-08-16 13:44:03.283556408 +0000 UTC m=+271.159980604 (delta=74.588421ms)
	I0816 13:44:03.389749   57440 fix.go:200] guest clock delta is within tolerance: 74.588421ms
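fix.go's "guest clock" lines above compare the timestamp returned by date +%s.%N inside the VM against the host's clock, and the restart proceeds without adjustment because the ~74.6ms delta is within the allowed skew. A small Go sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not the one minikube actually applies.

    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta returns the absolute skew between the guest clock (parsed from
    // `date +%s.%N` run in the VM) and the host clock, and whether it falls
    // within the allowed tolerance.
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1723815843, 358144829)          // 1723815843.358144829 from the log
        host := guest.Add(-74588421 * time.Nanosecond)     // reproduces the 74.588421ms delta
        d, ok := clockDelta(guest, host, 2*time.Second)    // 2s tolerance is an assumption
        fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }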
	I0816 13:44:03.389754   57440 start.go:83] releasing machines lock for "no-preload-311070", held for 20.812259998s
	I0816 13:44:03.389779   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.390029   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:03.392788   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393137   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.393160   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393365   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.393870   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394042   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394125   57440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:03.394180   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.394215   57440 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:03.394235   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.396749   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.396813   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397124   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397152   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397180   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397197   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397466   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397543   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397717   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397731   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397874   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397921   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397998   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.398077   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.473552   57440 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:03.497958   57440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:03.644212   57440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:03.651347   57440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:03.651455   57440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:03.667822   57440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:03.667842   57440 start.go:495] detecting cgroup driver to use...
	I0816 13:44:03.667915   57440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:03.685838   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:03.700002   57440 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:03.700073   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:03.713465   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:03.726793   57440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:03.838274   57440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:03.967880   57440 docker.go:233] disabling docker service ...
	I0816 13:44:03.967951   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:03.982178   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:03.994574   57440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:04.132374   57440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:04.242820   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:04.257254   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:04.277961   57440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:04.278018   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.288557   57440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:04.288621   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.299108   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.310139   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.320850   57440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:04.332224   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.342905   57440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.361606   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.372423   57440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:04.382305   57440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:04.382355   57440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:04.396774   57440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:04.408417   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:04.516638   57440 ssh_runner.go:195] Run: sudo systemctl restart crio
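The block above prepares CRI-O inside the VM: crictl is pointed at crio.sock, the 02-crio.conf drop-in is edited in place (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl), the bridge-netfilter check falls back to loading br_netfilter when /proc/sys/net/bridge is absent, IPv4 forwarding is enabled, and crio is restarted. A minimal Go sketch of just the netfilter/forwarding fallback is below; it shells out locally with os/exec, whereas minikube runs the same commands in the guest over its SSH runner.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the sequence in the log: verify that the
    // bridge-nf-call-iptables sysctl is readable, load br_netfilter if it is
    // not, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // /proc/sys/net/bridge only appears once the br_netfilter module is loaded.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            return fmt.Errorf("enable ip_forward: %w", err)
        }
        return nil
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }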
	I0816 13:44:04.684247   57440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:04.684316   57440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:04.689824   57440 start.go:563] Will wait 60s for crictl version
	I0816 13:44:04.689878   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:04.693456   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:04.732628   57440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:04.732712   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.760111   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.790127   57440 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
	I0816 13:44:04.791315   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:04.794224   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794587   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:04.794613   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794794   57440 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:04.798848   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:04.811522   57440 kubeadm.go:883] updating cluster {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:04.811628   57440 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:04.811685   57440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:04.845546   57440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:04.845567   57440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:04.845630   57440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.845654   57440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.845687   57440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:04.845714   57440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.845694   57440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.845789   57440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.845839   57440 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:44:04.845875   57440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.847440   57440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.847454   57440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.847484   57440 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:44:04.847429   57440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847431   57440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.847508   57440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.036225   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.071514   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.075186   57440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 13:44:05.075233   57440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.075273   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111591   57440 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 13:44:05.111634   57440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.111687   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111704   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.145127   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.145289   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.186194   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.200886   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.203824   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.208201   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.209021   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.234117   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.234893   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.245119   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 13:44:05.305971   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:44:05.306084   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.374880   57440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 13:44:05.374922   57440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.374971   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399114   57440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 13:44:05.399156   57440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.399187   57440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 13:44:05.399216   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399225   57440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.399267   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399318   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:44:05.399414   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:05.401940   57440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 13:44:05.401975   57440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.402006   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.513930   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 13:44:05.513961   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514017   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514032   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.514059   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.514112   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 13:44:05.514115   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.514150   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.634275   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.634340   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.864118   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
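Note: because no preload tarball matches v1.31.0 with crio, the lines above replay the same per-image sequence for each cached image, interleaved across images. A minimal sketch of that sequence for one image (commands as they appear in the log; the image/tarball names below are one concrete example):

    # check the image ID in the CRI-O store, drop a stale copy, then load the cached tarball
    img=registry.k8s.io/kube-proxy:v1.31.0           # one of the images listed above
    tar=/var/lib/minikube/images/kube-proxy_v1.31.0  # tarball copied from the host-side cache
    sudo podman image inspect --format '{{.Id}}' "$img" || true  # compare against the expected hash
    sudo crictl rmi "$img"                                        # remove the mismatching image, if any
    sudo podman load -i "$tar"                                    # import the cached image into CRI-O's store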
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:07.542707   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.028664898s)
	I0816 13:44:07.542745   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 13:44:07.542770   57440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542773   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.028589727s)
	I0816 13:44:07.542817   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.028737534s)
	I0816 13:44:07.542831   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542837   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:07.542869   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:07.542888   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.908584925s)
	I0816 13:44:07.542937   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:07.542951   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.908590671s)
	I0816 13:44:07.542995   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:07.543034   57440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.678883978s)
	I0816 13:44:07.543068   57440 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 13:44:07.543103   57440 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:07.543138   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:11.390456   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (3.847434032s)
	I0816 13:44:11.390507   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:44:11.390610   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.390647   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.847797916s)
	I0816 13:44:11.390674   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 13:44:11.390684   57440 ssh_runner.go:235] Completed: which crictl: (3.847535001s)
	I0816 13:44:11.390740   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.390780   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (3.847819859s)
	I0816 13:44:11.390810   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.847960553s)
	I0816 13:44:11.390825   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:44:11.390848   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:11.390908   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:11.390923   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (3.848033361s)
	I0816 13:44:11.390978   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:11.461833   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 13:44:11.461859   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461905   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461922   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:44:11.461843   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.461990   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 13:44:11.462013   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:11.462557   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:44:11.462649   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
	I0816 13:44:13.839070   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.377139357s)
	I0816 13:44:13.839101   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 13:44:13.839141   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839207   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839255   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377282192s)
	I0816 13:44:13.839312   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.377281378s)
	I0816 13:44:13.839358   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 13:44:13.839358   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.376690281s)
	I0816 13:44:13.839379   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 13:44:13.839318   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:13.880059   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 13:44:13.880203   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:15.818912   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.979684366s)
	I0816 13:44:15.818943   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 13:44:15.818975   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.938747663s)
	I0816 13:44:15.818986   57440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.819000   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 13:44:15.819043   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:17.795989   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976923514s)
	I0816 13:44:17.796013   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 13:44:17.796040   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:17.796088   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:19.147815   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351703003s)
	I0816 13:44:19.147843   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 13:44:19.147869   57440 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.147919   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.791370   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 13:44:19.791414   57440 cache_images.go:123] Successfully loaded all cached images
	I0816 13:44:19.791421   57440 cache_images.go:92] duration metric: took 14.945842963s to LoadCachedImages
	I0816 13:44:19.791440   57440 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0 crio true true} ...
	I0816 13:44:19.791593   57440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
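Note: the kubelet unit text above is written as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down). The empty `ExecStart=` is the standard systemd idiom for clearing any ExecStart inherited from the base /lib/systemd/system/kubelet.service before the drop-in defines its own command line. Roughly, the generated drop-in has this shape (flags taken from this run, trimmed):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override=no-preload-311070 --node-ip=192.168.61.116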
	I0816 13:44:19.791681   57440 ssh_runner.go:195] Run: crio config
	I0816 13:44:19.843963   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:19.843984   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:19.844003   57440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:19.844029   57440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-311070 NodeName:no-preload-311070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:19.844189   57440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-311070"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:19.844250   57440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:19.854942   57440 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:19.855014   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:19.864794   57440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 13:44:19.881210   57440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:19.897450   57440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0816 13:44:19.916038   57440 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:19.919995   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:19.934081   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:20.077422   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:20.093846   57440 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070 for IP: 192.168.61.116
	I0816 13:44:20.093864   57440 certs.go:194] generating shared ca certs ...
	I0816 13:44:20.093881   57440 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:20.094055   57440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:20.094120   57440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:20.094135   57440 certs.go:256] generating profile certs ...
	I0816 13:44:20.094236   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.key
	I0816 13:44:20.094325   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key.000c4f90
	I0816 13:44:20.094391   57440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key
	I0816 13:44:20.094529   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:20.094571   57440 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:20.094584   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:20.094621   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:20.094654   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:20.094795   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:20.094874   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:20.096132   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:20.130987   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:20.160701   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:20.187948   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:20.217162   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:44:20.242522   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:44:20.273582   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:20.300613   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:20.328363   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:20.353396   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:20.377770   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:20.401760   57440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:20.418302   57440 ssh_runner.go:195] Run: openssl version
	I0816 13:44:20.424065   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:20.434841   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439352   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439398   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.445210   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:20.455727   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:20.466095   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470528   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470568   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.476080   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:20.486189   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:20.496373   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500696   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500737   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.506426   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
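Note: the ln/openssl pairs above follow OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are found via symlinks named after the subject-name hash plus a numeric suffix, which is exactly what `openssl x509 -hash -noout` prints (hence the 3ec20f2e.0, b5213941.0 and 51391683.0 links). A minimal sketch of one such installation, mirroring the minikubeCA case:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"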
	I0816 13:44:20.517130   57440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:20.521664   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:20.527604   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:20.533478   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:20.539285   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:20.545042   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:20.550681   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
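Note: `openssl x509 -checkend 86400` exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, so the six runs above are a quick "will anything expire within a day" gate before the existing control-plane certs are reused. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert still valid for at least 24h"
    else
      echo "cert expires within 24h (or is unreadable) - regenerate"
    fi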
	I0816 13:44:20.556239   57440 kubeadm.go:392] StartCluster: {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:20.556314   57440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:20.556391   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.594069   57440 cri.go:89] found id: ""
	I0816 13:44:20.594128   57440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:20.604067   57440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:20.604085   57440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:20.604131   57440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:20.614182   57440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:20.615072   57440 kubeconfig.go:125] found "no-preload-311070" server: "https://192.168.61.116:8443"
	I0816 13:44:20.617096   57440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:20.626046   57440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0816 13:44:20.626069   57440 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:20.626078   57440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:20.626114   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.659889   57440 cri.go:89] found id: ""
	I0816 13:44:20.659954   57440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:20.676977   57440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:20.686930   57440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:20.686946   57440 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:20.686985   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:20.696144   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:20.696222   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:20.705550   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:20.714350   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:20.714399   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:20.723636   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.732287   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:20.732329   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.741390   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:20.749913   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:20.749956   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:20.758968   57440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:20.768054   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:20.872847   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:21.933273   57440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060394194s)
	I0816 13:44:21.933303   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.130462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
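Note: on restart the control plane is rebuilt phase by phase rather than via a full `kubeadm init`: the certs, kubeconfig, kubelet-start and control-plane phases are each run against the regenerated /var/tmp/minikube/kubeadm.yaml. A compact restatement of the sequence above:

    # replay the individual init phases against the generated config
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done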
	I0816 13:44:23.689897   58430 start.go:364] duration metric: took 2m7.587518205s to acquireMachinesLock for "default-k8s-diff-port-893736"
	I0816 13:44:23.689958   58430 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:23.689965   58430 fix.go:54] fixHost starting: 
	I0816 13:44:23.690363   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:23.690401   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:23.707766   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0816 13:44:23.708281   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:23.709439   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:23.709462   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:23.709757   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:23.709906   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:23.710017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:23.711612   58430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-893736: state=Stopped err=<nil>
	I0816 13:44:23.711655   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	W0816 13:44:23.711797   58430 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:23.713600   58430 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-893736" ...
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:44:23.714877   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Start
	I0816 13:44:23.715069   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring networks are active...
	I0816 13:44:23.715788   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network default is active
	I0816 13:44:23.716164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network mk-default-k8s-diff-port-893736 is active
	I0816 13:44:23.716648   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Getting domain xml...
	I0816 13:44:23.717424   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Creating domain...
	I0816 13:44:24.979917   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting to get IP...
	I0816 13:44:24.980942   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981375   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981448   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:24.981363   59082 retry.go:31] will retry after 199.038336ms: waiting for machine to come up
	I0816 13:44:25.181886   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182350   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.182330   59082 retry.go:31] will retry after 297.566018ms: waiting for machine to come up
	I0816 13:44:25.481811   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482271   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482296   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.482234   59082 retry.go:31] will retry after 297.833233ms: waiting for machine to come up
	I0816 13:44:25.781831   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782445   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782479   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.782400   59082 retry.go:31] will retry after 459.810978ms: waiting for machine to come up
	I0816 13:44:22.220022   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.317717   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:22.317800   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:22.818025   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.318171   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.354996   57440 api_server.go:72] duration metric: took 1.037294965s to wait for apiserver process to appear ...
	I0816 13:44:23.355023   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:23.355043   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:23.355677   57440 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0816 13:44:23.855190   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.719152   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.719184   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.719204   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.756329   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.756366   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.855581   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.862856   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:26.862885   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.355555   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.365664   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.365702   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.855844   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.863185   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.863227   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:28.355490   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:28.361410   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:44:28.374558   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:28.374593   57440 api_server.go:131] duration metric: took 5.019562023s to wait for apiserver health ...
	I0816 13:44:28.374604   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:28.374613   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:28.376749   57440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:28.378413   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:28.401199   57440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:28.420798   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:28.452605   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:28.452645   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:28.452655   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:28.452663   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:28.452671   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:28.452680   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:28.452689   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:28.452704   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:28.452710   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:44:28.452719   57440 system_pods.go:74] duration metric: took 31.89892ms to wait for pod list to return data ...
	I0816 13:44:28.452726   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:28.463229   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:28.463262   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:28.463275   57440 node_conditions.go:105] duration metric: took 10.544476ms to run NodePressure ...
	I0816 13:44:28.463296   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:28.809304   57440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819091   57440 kubeadm.go:739] kubelet initialised
	I0816 13:44:28.819115   57440 kubeadm.go:740] duration metric: took 9.779672ms waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819124   57440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:28.827828   57440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.840277   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840310   57440 pod_ready.go:82] duration metric: took 12.450089ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.840322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840333   57440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.847012   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847036   57440 pod_ready.go:82] duration metric: took 6.692927ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.847045   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847050   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.861358   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861404   57440 pod_ready.go:82] duration metric: took 14.346379ms for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.861417   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861428   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.870641   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870663   57440 pod_ready.go:82] duration metric: took 9.224713ms for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.870671   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870678   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.224281   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224310   57440 pod_ready.go:82] duration metric: took 353.622663ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.224322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224331   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.624518   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624552   57440 pod_ready.go:82] duration metric: took 400.212041ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.624567   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624577   57440 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:30.030291   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030327   57440 pod_ready.go:82] duration metric: took 405.73495ms for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:30.030341   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030352   57440 pod_ready.go:39] duration metric: took 1.211214389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:30.030372   57440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:30.045247   57440 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:30.045279   57440 kubeadm.go:597] duration metric: took 9.441179951s to restartPrimaryControlPlane
	I0816 13:44:30.045291   57440 kubeadm.go:394] duration metric: took 9.489057744s to StartCluster
	I0816 13:44:30.045312   57440 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.045410   57440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:30.047053   57440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.047310   57440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:30.047415   57440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:30.047486   57440 addons.go:69] Setting storage-provisioner=true in profile "no-preload-311070"
	I0816 13:44:30.047521   57440 addons.go:234] Setting addon storage-provisioner=true in "no-preload-311070"
	W0816 13:44:30.047534   57440 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:30.047569   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048048   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048079   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048302   57440 addons.go:69] Setting default-storageclass=true in profile "no-preload-311070"
	I0816 13:44:30.048339   57440 addons.go:69] Setting metrics-server=true in profile "no-preload-311070"
	I0816 13:44:30.048377   57440 addons.go:234] Setting addon metrics-server=true in "no-preload-311070"
	W0816 13:44:30.048387   57440 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:30.048424   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048343   57440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-311070"
	I0816 13:44:30.048812   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048834   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048933   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048957   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.049282   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:30.050905   57440 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:30.052478   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:30.069405   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0816 13:44:30.069463   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0816 13:44:30.069735   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0816 13:44:30.069949   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070072   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070145   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070488   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070506   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070586   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070598   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070618   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070627   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070977   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071006   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071031   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071212   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071639   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.071621   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.074680   57440 addons.go:234] Setting addon default-storageclass=true in "no-preload-311070"
	W0816 13:44:30.074699   57440 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:30.074730   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.075073   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.075100   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.088961   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0816 13:44:30.089421   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.089952   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.089971   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.090128   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0816 13:44:30.090429   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.090491   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.090744   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.090933   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.090950   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.091263   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.091463   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.093258   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.093571   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:30.094479   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0816 13:44:30.094965   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.095482   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.095502   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.095857   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.096322   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.096361   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.128555   57440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.128676   57440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:26.244353   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245158   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.245062   59082 retry.go:31] will retry after 680.176025ms: waiting for machine to come up
	I0816 13:44:26.926654   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927139   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.927106   59082 retry.go:31] will retry after 720.530442ms: waiting for machine to come up
	I0816 13:44:27.648858   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649342   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649367   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:27.649289   59082 retry.go:31] will retry after 930.752133ms: waiting for machine to come up
	I0816 13:44:28.581283   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581684   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581709   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:28.581635   59082 retry.go:31] will retry after 972.791503ms: waiting for machine to come up
	I0816 13:44:29.556168   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556583   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:29.556525   59082 retry.go:31] will retry after 1.18129541s: waiting for machine to come up
	I0816 13:44:30.739498   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739951   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739978   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:30.739883   59082 retry.go:31] will retry after 2.27951459s: waiting for machine to come up
	I0816 13:44:30.133959   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 13:44:30.134516   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.135080   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.135105   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.135463   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.135598   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.137494   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.137805   57440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.137824   57440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:30.137839   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.141006   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141509   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.141544   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141772   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.141952   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.142150   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.142305   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.164598   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:30.164627   57440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:30.164653   57440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.164654   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.164662   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:30.164687   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.168935   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169259   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169588   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169615   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169806   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.169828   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169859   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169953   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170096   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170103   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.170243   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170241   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.170389   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170505   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.285806   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:30.312267   57440 node_ready.go:35] waiting up to 6m0s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:30.406371   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.409491   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:30.409515   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:30.440485   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:30.440508   57440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:30.480735   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.484549   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:30.484573   57440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:30.541485   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:32.535406   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:33.204746   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.723973086s)
	I0816 13:44:33.204802   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204817   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.204843   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.798437569s)
	I0816 13:44:33.204877   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204889   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205092   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205116   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205126   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205134   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205357   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205359   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205379   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205387   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205395   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205408   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205445   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205454   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205593   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205605   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.214075   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.214095   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.214307   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.214320   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259136   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.717608276s)
	I0816 13:44:33.259188   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259212   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259468   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.259485   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259495   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259503   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259988   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.260004   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.260016   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.260026   57440 addons.go:475] Verifying addon metrics-server=true in "no-preload-311070"
	I0816 13:44:33.262190   57440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
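Rather than a full `kubeadm init`, the restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the generated /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of driving that sequence with os/exec follows; the binary and config paths are taken from the log, while the function name and error handling are illustrative only, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases re-runs the kubeadm init phases that the log shows during a
// control-plane restart. Paths mirror the log; the helper itself is only a
// sketch, not minikube's implementation.
func runInitPhases() error {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		// Each phase is run to completion before the next one starts.
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases(); err != nil {
		fmt.Println(err)
	}
}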
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:33.022378   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023028   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:33.022950   59082 retry.go:31] will retry after 1.906001247s: waiting for machine to come up
	I0816 13:44:34.930169   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930674   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930702   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:34.930612   59082 retry.go:31] will retry after 2.809719622s: waiting for machine to come up
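The "will retry after ..." lines above come from a backoff loop that re-queries the libvirt DHCP leases until the freshly started VM reports an IP. A rough Go sketch of such a loop follows; the getIP callback and the backoff figures are assumptions for illustration, not the real retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls getIP until it succeeds or the deadline passes, sleeping a
// randomized, growing interval between attempts, which is the same shape as
// the "will retry after ..." messages in the log. getIP stands in for the
// driver's DHCP-lease lookup.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		wait := time.Duration(float64(base) * float64(attempt) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Fake lookup that succeeds on the fourth attempt, for demonstration.
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.186", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}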
	I0816 13:44:33.263780   57440 addons.go:510] duration metric: took 3.216351591s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 13:44:34.816280   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:36.817474   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.742122   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742506   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742545   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:37.742464   59082 retry.go:31] will retry after 4.139761236s: waiting for machine to come up
	I0816 13:44:37.815407   57440 node_ready.go:49] node "no-preload-311070" has status "Ready":"True"
	I0816 13:44:37.815428   57440 node_ready.go:38] duration metric: took 7.503128864s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:37.815437   57440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:37.820318   57440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825460   57440 pod_ready.go:93] pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.825478   57440 pod_ready.go:82] duration metric: took 5.136508ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825486   57440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829609   57440 pod_ready.go:93] pod "etcd-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.829628   57440 pod_ready.go:82] duration metric: took 4.133294ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829635   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:39.835973   57440 pod_ready.go:103] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:40.335270   57440 pod_ready.go:93] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:40.335289   57440 pod_ready.go:82] duration metric: took 2.505647853s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:40.335298   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
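The pod_ready lines above poll each system-critical pod until its Ready condition turns True. A hedged client-go sketch of that check follows; the kubeconfig path, timeout, and 2s interval are illustrative assumptions, not the test helper's real code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod in kube-system until its Ready condition is True,
// mirroring what the pod_ready log lines report.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q never became Ready", name)
}

func main() {
	// Hypothetical kubeconfig path; the real tests use the profile's context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "etcd-no-preload-311070", 6*time.Minute))
}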
	I0816 13:44:43.233555   57240 start.go:364] duration metric: took 55.654362151s to acquireMachinesLock for "embed-certs-302520"
	I0816 13:44:43.233638   57240 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:43.233649   57240 fix.go:54] fixHost starting: 
	I0816 13:44:43.234047   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:43.234078   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:43.253929   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0816 13:44:43.254405   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:43.254878   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:44:43.254900   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:43.255235   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:43.255400   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:44:43.255578   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:44:43.257434   57240 fix.go:112] recreateIfNeeded on embed-certs-302520: state=Stopped err=<nil>
	I0816 13:44:43.257472   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	W0816 13:44:43.257637   57240 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:43.259743   57240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-302520" ...
	I0816 13:44:41.885729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886143   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Found IP for machine: 192.168.50.186
	I0816 13:44:41.886162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserving static IP address...
	I0816 13:44:41.886178   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has current primary IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886540   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.886570   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | skip adding static IP to network mk-default-k8s-diff-port-893736 - found existing host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"}
	I0816 13:44:41.886585   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserved static IP address: 192.168.50.186
	I0816 13:44:41.886600   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for SSH to be available...
	I0816 13:44:41.886617   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Getting to WaitForSSH function...
	I0816 13:44:41.888671   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889003   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.889047   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889118   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH client type: external
	I0816 13:44:41.889142   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa (-rw-------)
	I0816 13:44:41.889181   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:41.889201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | About to run SSH command:
	I0816 13:44:41.889215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | exit 0
	I0816 13:44:42.017010   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:42.017374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetConfigRaw
	I0816 13:44:42.017979   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.020580   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.020958   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.020992   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.021174   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:44:42.021342   58430 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:42.021356   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.021521   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.023732   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024033   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.024057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.024354   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024667   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.024811   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.024994   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.025005   58430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:42.137459   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:42.137495   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137722   58430 buildroot.go:166] provisioning hostname "default-k8s-diff-port-893736"
	I0816 13:44:42.137745   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137925   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.140599   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.140987   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.141017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.141148   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.141309   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141430   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141536   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.141677   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.141843   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.141855   58430 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893736 && echo "default-k8s-diff-port-893736" | sudo tee /etc/hostname
	I0816 13:44:42.267643   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893736
	
	I0816 13:44:42.267670   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.270489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.270834   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.270867   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.271089   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.271266   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271405   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271527   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.271675   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.271829   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.271847   58430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:42.398010   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:42.398057   58430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:42.398122   58430 buildroot.go:174] setting up certificates
	I0816 13:44:42.398139   58430 provision.go:84] configureAuth start
	I0816 13:44:42.398157   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.398484   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.401217   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401566   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.401587   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.404082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404380   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.404425   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404541   58430 provision.go:143] copyHostCerts
	I0816 13:44:42.404596   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:42.404606   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:42.404666   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:42.404758   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:42.404767   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:42.404788   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:42.404850   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:42.404857   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:42.404873   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:42.404965   58430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893736 san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]
	I0816 13:44:42.551867   58430 provision.go:177] copyRemoteCerts
	I0816 13:44:42.551928   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:42.551954   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.554945   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555276   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.555316   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.555699   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.555838   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.555964   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:42.643591   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:42.667108   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 13:44:42.690852   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:42.714001   58430 provision.go:87] duration metric: took 315.84846ms to configureAuth
	I0816 13:44:42.714030   58430 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:42.714189   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:42.714263   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.716726   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.717110   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717282   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.717486   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717621   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717740   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.717883   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.718038   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.718055   58430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:42.988769   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:42.988798   58430 machine.go:96] duration metric: took 967.444538ms to provisionDockerMachine
	I0816 13:44:42.988814   58430 start.go:293] postStartSetup for "default-k8s-diff-port-893736" (driver="kvm2")
	I0816 13:44:42.988833   58430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:42.988864   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.989226   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:42.989261   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.991868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.992184   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992364   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.992537   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.992689   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.992838   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.079199   58430 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:43.083277   58430 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:43.083296   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:43.083357   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:43.083459   58430 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:43.083576   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:43.092684   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:43.115693   58430 start.go:296] duration metric: took 126.86489ms for postStartSetup
	I0816 13:44:43.115735   58430 fix.go:56] duration metric: took 19.425768942s for fixHost
	I0816 13:44:43.115761   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.118597   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.118915   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.118947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.119100   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.119306   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119442   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.119683   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:43.119840   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:43.119850   58430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:43.233362   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815883.193133132
	
	I0816 13:44:43.233394   58430 fix.go:216] guest clock: 1723815883.193133132
	I0816 13:44:43.233406   58430 fix.go:229] Guest: 2024-08-16 13:44:43.193133132 +0000 UTC Remote: 2024-08-16 13:44:43.115740856 +0000 UTC m=+147.151935383 (delta=77.392276ms)
	I0816 13:44:43.233479   58430 fix.go:200] guest clock delta is within tolerance: 77.392276ms
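fix.go parses the guest's `date +%s.%N` output, compares it with the host clock, and only resyncs when the delta exceeds a tolerance; here the ~77ms delta passes. A small Go sketch of that comparison, with an assumed tolerance value rather than minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the given host reference time.
func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// "Remote" timestamp and guest output taken from the log above.
	host := time.Unix(0, 1723815883115740856)
	delta, err := clockDelta("1723815883.193133132", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 1 * time.Second // assumed threshold for illustration
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, within)
}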
	I0816 13:44:43.233486   58430 start.go:83] releasing machines lock for "default-k8s-diff-port-893736", held for 19.543554553s
	I0816 13:44:43.233517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.233783   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:43.236492   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.236875   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.236901   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.237136   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237703   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237943   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.238074   58430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:43.238153   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.238182   58430 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:43.238215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.240639   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241029   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241193   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241360   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.241573   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241581   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.241601   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241733   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241732   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.241895   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.242052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.242178   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.352903   58430 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:43.359071   58430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:43.509233   58430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:43.516592   58430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:43.516666   58430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:43.534069   58430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:43.534096   58430 start.go:495] detecting cgroup driver to use...
	I0816 13:44:43.534167   58430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:43.553305   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:43.569958   58430 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:43.570007   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:43.590642   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:43.606411   58430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:43.733331   58430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:43.882032   58430 docker.go:233] disabling docker service ...
	I0816 13:44:43.882110   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:43.896780   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:43.909702   58430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:44.044071   58430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:44.170798   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:44.184421   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:44.203201   58430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:44.203269   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.213647   58430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:44.213708   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.224261   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.235295   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.247670   58430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:44.264065   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.276212   58430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.296049   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.307920   58430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:44.319689   58430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:44.319746   58430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:44.335735   58430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:44.352364   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:44.476754   58430 ssh_runner.go:195] Run: sudo systemctl restart crio
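The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts CRI-O. A simplified Go sketch of the same kind of keyed replacement follows; the helper name and write logic are illustrative, not minikube's crio.go.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey replaces a `key = value` line in a CRI-O drop-in config,
// mimicking the sed edits shown in the log. Purely illustrative.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("%s: key %q not found", path, key)
	}
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, val := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioKey(path, key, val); err != nil {
			fmt.Println(err)
		}
	}
	// After editing, the runtime is reloaded and restarted
	// (systemctl daemon-reload && systemctl restart crio), as in the log.
}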
	I0816 13:44:44.618847   58430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:44.618914   58430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:44.623946   58430 start.go:563] Will wait 60s for crictl version
	I0816 13:44:44.624004   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:44:44.627796   58430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:44.666274   58430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:44.666350   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.694476   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.723937   58430 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:43.261237   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Start
	I0816 13:44:43.261399   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring networks are active...
	I0816 13:44:43.262183   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network default is active
	I0816 13:44:43.262591   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network mk-embed-certs-302520 is active
	I0816 13:44:43.263044   57240 main.go:141] libmachine: (embed-certs-302520) Getting domain xml...
	I0816 13:44:43.263849   57240 main.go:141] libmachine: (embed-certs-302520) Creating domain...
	I0816 13:44:44.565632   57240 main.go:141] libmachine: (embed-certs-302520) Waiting to get IP...
	I0816 13:44:44.566705   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.567120   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.567211   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.567113   59274 retry.go:31] will retry after 259.265867ms: waiting for machine to come up
	I0816 13:44:44.827603   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.828117   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.828152   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.828043   59274 retry.go:31] will retry after 271.270487ms: waiting for machine to come up
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
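The run of pgrep probes above is minikube polling roughly every 500ms for the kube-apiserver process to appear after kubelet-start. A minimal Go sketch of that wait, with a made-up helper name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess keeps running the same pgrep probe the log shows
// until it exits 0 (a matching process exists) or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep found the apiserver process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Minute))
}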
	I0816 13:44:44.725112   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:44.728077   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728446   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:44.728469   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728728   58430 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:44.733365   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:44.746196   58430 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:44.746325   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:44.746385   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:44.787402   58430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:44.787481   58430 ssh_runner.go:195] Run: which lz4
	I0816 13:44:44.791755   58430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:44.797290   58430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:44.797320   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:44:42.342663   57440 pod_ready.go:93] pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.342685   57440 pod_ready.go:82] duration metric: took 2.007381193s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.342694   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346807   57440 pod_ready.go:93] pod "kube-proxy-b8d5b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.346824   57440 pod_ready.go:82] duration metric: took 4.124529ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346832   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351010   57440 pod_ready.go:93] pod "kube-scheduler-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.351025   57440 pod_ready.go:82] duration metric: took 4.186812ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351032   57440 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:44.358663   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:46.359708   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:45.100554   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.101150   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.101265   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.101207   59274 retry.go:31] will retry after 309.469795ms: waiting for machine to come up
	I0816 13:44:45.412518   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.413009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.413040   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.412975   59274 retry.go:31] will retry after 502.564219ms: waiting for machine to come up
	I0816 13:44:45.917731   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.918284   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.918316   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.918235   59274 retry.go:31] will retry after 723.442166ms: waiting for machine to come up
	I0816 13:44:46.642971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:46.643467   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:46.643498   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:46.643400   59274 retry.go:31] will retry after 600.365383ms: waiting for machine to come up
	I0816 13:44:47.245233   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:47.245756   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:47.245785   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:47.245710   59274 retry.go:31] will retry after 1.06438693s: waiting for machine to come up
	I0816 13:44:48.312043   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:48.312842   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:48.312886   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:48.312840   59274 retry.go:31] will retry after 903.877948ms: waiting for machine to come up
	I0816 13:44:49.218419   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:49.218805   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:49.218835   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:49.218758   59274 retry.go:31] will retry after 1.73892963s: waiting for machine to come up
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.230345   58430 crio.go:462] duration metric: took 1.438624377s to copy over tarball
	I0816 13:44:46.230429   58430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:48.358060   58430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127589486s)
	I0816 13:44:48.358131   58430 crio.go:469] duration metric: took 2.127754698s to extract the tarball
	I0816 13:44:48.358145   58430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:48.398054   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:48.449391   58430 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:44:48.449416   58430 cache_images.go:84] Images are preloaded, skipping loading
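
Note: the decision logged at crio.go:510/514 above turns on whether the images reported by `crictl images --output json` already include everything the preload tarball would provide. A minimal Go sketch of that check follows; the JSON field names ("images", "repoTags") follow the CRI list-images output as commonly documented and are an assumption here, not a copy of minikube's own code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the assumed JSON shape of `crictl images --output json`:
// {"images":[{"id":"...","repoTags":["..."], ...}]}.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloadNeeded returns true if any required image tag is missing from the runtime,
// i.e. the situation in which the preload tarball has to be copied and extracted.
func preloadNeeded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return true, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return true, err
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return true, nil // at least one image missing: fall back to the preload
		}
	}
	return false, nil
}

func main() {
	needed, err := preloadNeeded([]string{"registry.k8s.io/kube-apiserver:v1.31.0"})
	fmt.Println(needed, err)
}
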
	I0816 13:44:48.449425   58430 kubeadm.go:934] updating node { 192.168.50.186 8444 v1.31.0 crio true true} ...
	I0816 13:44:48.449576   58430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-893736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:48.449662   58430 ssh_runner.go:195] Run: crio config
	I0816 13:44:48.499389   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:48.499413   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:48.499424   58430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:48.499452   58430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893736 NodeName:default-k8s-diff-port-893736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:48.499576   58430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-893736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:48.499653   58430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:48.509639   58430 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:48.509706   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:48.519099   58430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 13:44:48.535866   58430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:48.552977   58430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
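
Note: the kubeadm.yaml rendered above bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into the single file copied to /var/tmp/minikube/kubeadm.yaml.new. A quick stdlib-only sanity check that every document survived rendering is sketched below; this is an illustrative helper, not something the test itself runs.

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits a multi-document YAML file on "---" separators and reports
// the "kind:" of each document, so a truncated or mis-rendered kubeadm config
// is easy to spot before it is handed to kubeadm.
func listKinds(path string) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var kinds []string
	for _, doc := range strings.Split(string(data), "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return kinds, nil
}

func main() {
	kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// Expect: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds)
}
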
	I0816 13:44:48.571198   58430 ssh_runner.go:195] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:48.575881   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
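
Note: the one-liner above (and the earlier host.minikube.internal edit) is an idempotent upsert of a single /etc/hosts entry: drop any existing line ending in the hostname, append a fresh IP/name pair, and copy the result back. A minimal Go sketch of the same idea, assuming direct file access rather than minikube's ssh runner:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.50.186", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
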
	I0816 13:44:48.587850   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:48.703848   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:48.721449   58430 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736 for IP: 192.168.50.186
	I0816 13:44:48.721476   58430 certs.go:194] generating shared ca certs ...
	I0816 13:44:48.721496   58430 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:48.721677   58430 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:48.721731   58430 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:48.721745   58430 certs.go:256] generating profile certs ...
	I0816 13:44:48.721843   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/client.key
	I0816 13:44:48.721926   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key.64c9b41b
	I0816 13:44:48.721980   58430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key
	I0816 13:44:48.722107   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:48.722138   58430 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:48.722149   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:48.722182   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:48.722204   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:48.722225   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:48.722258   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:48.722818   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:48.779462   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:48.814653   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:48.887435   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:48.913644   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 13:44:48.937536   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:48.960729   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:48.984375   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:44:49.007997   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:49.031631   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:49.054333   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:49.076566   58430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:49.092986   58430 ssh_runner.go:195] Run: openssl version
	I0816 13:44:49.098555   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:49.109454   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114868   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114934   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.120811   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:49.131829   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:49.142825   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147276   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147322   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.152678   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:49.163622   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:49.174426   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179353   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179406   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.185129   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:49.196668   58430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:49.201447   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:49.207718   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:49.213869   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:49.220325   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:49.226220   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:49.231971   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
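
Note: the series of `openssl x509 -checkend 86400` runs above asks whether each control-plane certificate remains valid for at least the next 24 hours. The same check expressed with Go's crypto/x509, as an illustrative sketch rather than minikube's actual implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is the question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
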
	I0816 13:44:49.238080   58430 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:49.238178   58430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:49.238231   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.276621   58430 cri.go:89] found id: ""
	I0816 13:44:49.276719   58430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:49.287765   58430 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:49.287785   58430 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:49.287829   58430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:49.298038   58430 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:49.299171   58430 kubeconfig.go:125] found "default-k8s-diff-port-893736" server: "https://192.168.50.186:8444"
	I0816 13:44:49.301521   58430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:49.311800   58430 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.186
	I0816 13:44:49.311833   58430 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:49.311845   58430 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:49.311899   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.363716   58430 cri.go:89] found id: ""
	I0816 13:44:49.363784   58430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:49.381053   58430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:49.391306   58430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:49.391322   58430 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:49.391370   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 13:44:49.400770   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:49.400829   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:49.410252   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 13:44:49.419405   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:49.419481   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:49.429330   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.438521   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:49.438587   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.448144   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 13:44:49.456744   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:49.456805   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:49.466062   58430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:49.476159   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:49.597639   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.673182   58430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075495766s)
	I0816 13:44:50.673218   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.887802   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.958384   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:48.858145   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:51.358083   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:50.959807   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:50.960217   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:50.960236   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:50.960188   59274 retry.go:31] will retry after 2.32558417s: waiting for machine to come up
	I0816 13:44:53.287947   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:53.288441   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:53.288470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:53.288388   59274 retry.go:31] will retry after 1.85414625s: waiting for machine to come up
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.054015   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:51.054101   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.554846   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.055178   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.082087   58430 api_server.go:72] duration metric: took 1.028080423s to wait for apiserver process to appear ...
	I0816 13:44:52.082114   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:52.082133   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:52.082624   58430 api_server.go:269] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": dial tcp 192.168.50.186:8444: connect: connection refused
	I0816 13:44:52.582261   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.336530   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.336565   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.336580   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.374699   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.374733   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.583112   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.588756   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:55.588782   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.082182   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.088062   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.088108   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.582273   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.587049   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.587087   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:57.082664   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:57.092562   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:44:57.100740   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:57.100767   58430 api_server.go:131] duration metric: took 5.018647278s to wait for apiserver health ...
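
Note: the 403 and 500 responses above are expected while the freshly restarted apiserver finishes its post-start hooks (the "system:anonymous" 403 appears before RBAC bootstrap roles exist, the 500 while bootstrap-roles and priority-classes hooks are still pending); the wait loop simply retries /healthz until it returns 200 "ok". A sketch of such a poller, assuming an anonymous HTTPS probe against the self-signed endpoint (hence InsecureSkipVerify), which matches the responses in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Non-200 answers are treated as "not ready yet", just like transport errors.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, self-signed cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready: %d %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.186:8444/healthz", 5*time.Minute))
}
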
	I0816 13:44:57.100777   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:57.100784   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:57.102775   58430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:53.358390   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:55.358437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:57.104079   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:57.115212   58430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
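
Note: the 1-k8s.conflist written above configures the bridge CNI plugin recommended at cni.go:146. The exact file minikube generates is not shown in the log; the snippet below merely builds a representative bridge + host-local configuration for the 10.244.0.0/16 pod CIDR as an illustration, with field names taken from the upstream containernetworking plugin docs rather than minikube's template.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A representative bridge CNI conflist; keys mirror the standard
	// bridge and host-local plugin options, not minikube's actual file.
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
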
	I0816 13:44:57.137445   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:57.150376   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:57.150412   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:57.150422   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:57.150435   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:57.150448   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:57.150454   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:57.150458   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:57.150463   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:57.150472   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:44:57.150481   58430 system_pods.go:74] duration metric: took 13.019757ms to wait for pod list to return data ...
	I0816 13:44:57.150489   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:57.153699   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:57.153721   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:57.153731   58430 node_conditions.go:105] duration metric: took 3.237407ms to run NodePressure ...
	I0816 13:44:57.153752   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:57.439130   58430 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446848   58430 kubeadm.go:739] kubelet initialised
	I0816 13:44:57.446876   58430 kubeadm.go:740] duration metric: took 7.718113ms waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446885   58430 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:57.452263   58430 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.459002   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459024   58430 pod_ready.go:82] duration metric: took 6.735487ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.459033   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459039   58430 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.463723   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463742   58430 pod_ready.go:82] duration metric: took 4.695932ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.463751   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463756   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.468593   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468619   58430 pod_ready.go:82] duration metric: took 4.856498ms for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.468632   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468643   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.541251   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541278   58430 pod_ready.go:82] duration metric: took 72.626413ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.541290   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541296   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.940580   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940616   58430 pod_ready.go:82] duration metric: took 399.312571ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.940627   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940635   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.340647   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340671   58430 pod_ready.go:82] duration metric: took 400.026004ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.340683   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340694   58430 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.750549   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750573   58430 pod_ready.go:82] duration metric: took 409.872187ms for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.750588   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750598   58430 pod_ready.go:39] duration metric: took 1.303702313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:58.750626   58430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:58.766462   58430 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:58.766482   58430 kubeadm.go:597] duration metric: took 9.478690644s to restartPrimaryControlPlane
	I0816 13:44:58.766491   58430 kubeadm.go:394] duration metric: took 9.528416258s to StartCluster
	I0816 13:44:58.766509   58430 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.766572   58430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:58.770737   58430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.771036   58430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:58.771138   58430 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:58.771218   58430 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771232   58430 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771245   58430 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771281   58430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893736"
	I0816 13:44:58.771252   58430 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771337   58430 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:58.771371   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771285   58430 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771447   58430 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:58.771485   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771231   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:58.771653   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771682   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771750   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771781   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771839   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771886   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.772665   58430 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:58.773992   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:58.788717   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0816 13:44:58.789233   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.789833   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.789859   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.790269   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.790882   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.790913   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.791553   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0816 13:44:58.791556   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0816 13:44:58.791945   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.791979   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.792413   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792440   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.792813   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.792963   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792986   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.793013   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.793374   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.793940   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.793986   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.796723   58430 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.796740   58430 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:58.796763   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.797138   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.797184   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.806753   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0816 13:44:58.807162   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.807605   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.807624   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.807984   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.808229   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.809833   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.811642   58430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:58.812716   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0816 13:44:58.812888   58430 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:58.812902   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:58.812937   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.813184   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.813668   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.813695   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.813725   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0816 13:44:58.814101   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.814207   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.814696   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.814715   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.814948   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.814961   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.815304   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.815518   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.816936   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817482   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.817529   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.817543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817871   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.818057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.818219   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.818397   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.819251   58430 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:55.143862   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:55.144403   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:55.144433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:55.144354   59274 retry.go:31] will retry after 3.573850343s: waiting for machine to come up
	I0816 13:44:58.720104   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:58.720571   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:58.720606   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:58.720510   59274 retry.go:31] will retry after 4.52867767s: waiting for machine to come up
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.820720   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:58.820733   58430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:58.820747   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.823868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824290   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.824305   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.824629   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.824764   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.824860   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.830530   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0816 13:44:58.830894   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.831274   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.831294   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.831583   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.831729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.833321   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.833512   58430 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:58.833526   58430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:58.833543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.836244   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836626   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.836649   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836762   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.836947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.837098   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.837234   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.973561   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:58.995763   58430 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:44:59.118558   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:59.126100   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:59.126125   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:59.154048   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:59.162623   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:59.162649   58430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:59.213614   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.213635   58430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:59.233653   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.485000   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485030   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485329   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.485384   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485397   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485406   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485414   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485736   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485777   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485741   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.491692   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.491711   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.491938   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.491957   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.273964   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.04027784s)
	I0816 13:45:00.274018   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274036   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274032   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119945545s)
	I0816 13:45:00.274065   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274080   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274373   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274388   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274398   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274406   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274441   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274481   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274499   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274513   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274620   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274633   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274643   58430 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-893736"
	I0816 13:45:00.274749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274842   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274851   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.276747   58430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 13:45:00.278150   58430 addons.go:510] duration metric: took 1.506994649s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 13:44:57.858846   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:00.357028   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:03.253913   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254379   57240 main.go:141] libmachine: (embed-certs-302520) Found IP for machine: 192.168.39.125
	I0816 13:45:03.254401   57240 main.go:141] libmachine: (embed-certs-302520) Reserving static IP address...
	I0816 13:45:03.254418   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has current primary IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254776   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.254804   57240 main.go:141] libmachine: (embed-certs-302520) Reserved static IP address: 192.168.39.125
	I0816 13:45:03.254822   57240 main.go:141] libmachine: (embed-certs-302520) DBG | skip adding static IP to network mk-embed-certs-302520 - found existing host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"}
	I0816 13:45:03.254840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Getting to WaitForSSH function...
	I0816 13:45:03.254848   57240 main.go:141] libmachine: (embed-certs-302520) Waiting for SSH to be available...
	I0816 13:45:03.256951   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257302   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.257327   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH client type: external
	I0816 13:45:03.257483   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa (-rw-------)
	I0816 13:45:03.257519   57240 main.go:141] libmachine: (embed-certs-302520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:45:03.257528   57240 main.go:141] libmachine: (embed-certs-302520) DBG | About to run SSH command:
	I0816 13:45:03.257537   57240 main.go:141] libmachine: (embed-certs-302520) DBG | exit 0
	I0816 13:45:03.389262   57240 main.go:141] libmachine: (embed-certs-302520) DBG | SSH cmd err, output: <nil>: 
	I0816 13:45:03.389630   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetConfigRaw
	I0816 13:45:03.390305   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.392462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.392767   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.392795   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.393012   57240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:45:03.393212   57240 machine.go:93] provisionDockerMachine start ...
	I0816 13:45:03.393230   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:03.393453   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.395589   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.395949   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.395971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.396086   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.396258   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396589   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.396785   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.397004   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.397042   57240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:45:03.513624   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:45:03.513655   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.513954   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:45:03.513976   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.514199   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.517138   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517499   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.517520   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517672   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.517867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518007   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518168   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.518364   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.518583   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.518599   57240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-302520 && echo "embed-certs-302520" | sudo tee /etc/hostname
	I0816 13:45:03.647799   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-302520
	
	I0816 13:45:03.647840   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.650491   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.650846   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.650880   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.651103   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.651301   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651469   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651614   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.651778   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.651935   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.651951   57240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-302520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-302520/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-302520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:45:03.778350   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:45:03.778381   57240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:45:03.778411   57240 buildroot.go:174] setting up certificates
	I0816 13:45:03.778423   57240 provision.go:84] configureAuth start
	I0816 13:45:03.778435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.778689   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.781319   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781673   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.781695   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781829   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.783724   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784035   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.784064   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784180   57240 provision.go:143] copyHostCerts
	I0816 13:45:03.784243   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:45:03.784262   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:45:03.784335   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:45:03.784462   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:45:03.784474   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:45:03.784503   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:45:03.784568   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:45:03.784578   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:45:03.784600   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:45:03.784647   57240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.embed-certs-302520 san=[127.0.0.1 192.168.39.125 embed-certs-302520 localhost minikube]
	I0816 13:45:03.901261   57240 provision.go:177] copyRemoteCerts
	I0816 13:45:03.901314   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:45:03.901339   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.904505   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.904893   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.904933   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.905118   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.905329   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.905499   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.905650   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:03.996083   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:45:04.024594   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:45:04.054080   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:45:04.079810   57240 provision.go:87] duration metric: took 301.374056ms to configureAuth
	I0816 13:45:04.079865   57240 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:45:04.080048   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:45:04.080116   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.082649   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083037   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.083090   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083239   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.083430   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083775   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.083951   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.084149   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.084171   57240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:45:04.404121   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:45:04.404150   57240 machine.go:96] duration metric: took 1.010924979s to provisionDockerMachine
	I0816 13:45:04.404163   57240 start.go:293] postStartSetup for "embed-certs-302520" (driver="kvm2")
	I0816 13:45:04.404182   57240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:45:04.404202   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.404542   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:45:04.404574   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.407763   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408118   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.408145   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408311   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.408508   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.408685   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.408864   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.496519   57240 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:45:04.501262   57240 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:45:04.501282   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:45:04.501352   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:45:04.501440   57240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:45:04.501554   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:45:04.511338   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:04.535372   57240 start.go:296] duration metric: took 131.188411ms for postStartSetup
	I0816 13:45:04.535411   57240 fix.go:56] duration metric: took 21.301761751s for fixHost
	I0816 13:45:04.535435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.538286   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538651   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.538676   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538868   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.539069   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539208   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539344   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.539504   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.539702   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.539715   57240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:45:04.653529   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815904.606422212
	
	I0816 13:45:04.653556   57240 fix.go:216] guest clock: 1723815904.606422212
	I0816 13:45:04.653566   57240 fix.go:229] Guest: 2024-08-16 13:45:04.606422212 +0000 UTC Remote: 2024-08-16 13:45:04.535416156 +0000 UTC m=+359.547804920 (delta=71.006056ms)
	I0816 13:45:04.653598   57240 fix.go:200] guest clock delta is within tolerance: 71.006056ms
	I0816 13:45:04.653605   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 21.419990329s
	I0816 13:45:04.653631   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.653922   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:04.656682   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.657034   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657211   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657800   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657981   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.658069   57240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:45:04.658114   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.658172   57240 ssh_runner.go:195] Run: cat /version.json
	I0816 13:45:04.658193   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.660629   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.660942   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661051   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661076   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661315   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661474   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.661598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661646   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.661841   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.661904   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.662054   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.662199   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.767691   57240 ssh_runner.go:195] Run: systemctl --version
	I0816 13:45:04.773984   57240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:45:04.925431   57240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:45:04.931848   57240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:45:04.931931   57240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:45:04.951355   57240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:45:04.951381   57240 start.go:495] detecting cgroup driver to use...
	I0816 13:45:04.951442   57240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:45:04.972903   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:45:04.987531   57240 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:45:04.987600   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:45:05.001880   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:45:05.018403   57240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.999833   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:03.500652   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:05.143662   57240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:45:05.297447   57240 docker.go:233] disabling docker service ...
	I0816 13:45:05.297527   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:45:05.313382   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:45:05.327116   57240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:45:05.486443   57240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:45:05.620465   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:45:05.634813   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:45:05.653822   57240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:45:05.653887   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.664976   57240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:45:05.665045   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.676414   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.688631   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.700400   57240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:45:05.712822   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.724573   57240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.742934   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.755669   57240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:45:05.766837   57240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:45:05.766890   57240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:45:05.782296   57240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:45:05.793695   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.919559   57240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:45:06.057480   57240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:45:06.057543   57240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:45:06.062348   57240 start.go:563] Will wait 60s for crictl version
	I0816 13:45:06.062414   57240 ssh_runner.go:195] Run: which crictl
	I0816 13:45:06.066456   57240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:45:06.104075   57240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:45:06.104156   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.132406   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.161878   57240 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
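	Note on the netfilter lines in this block (13:45:05.755 through 13:45:05.782): the bridge-netfilter sysctl probe fails with status 255 because br_netfilter is not yet loaded, so minikube falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A rough sketch of that check-then-fallback sequence, assuming local command execution rather than minikube's ssh_runner:

    // Sketch: probe the sysctl; if unreadable, load br_netfilter; then enable ip_forward.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
    	}
    	return nil
    }

    func ensureBridgeNetfilter() error {
    	// A failure here usually means the br_netfilter module is not loaded yet.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			return err
    		}
    	}
    	// Enable IPv4 forwarding, as the log does right after.
    	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    	}
    }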
	I0816 13:45:02.357119   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:04.361365   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.859546   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.163233   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:06.165924   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166310   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:06.166333   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166529   57240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:45:06.170722   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:06.183152   57240 kubeadm.go:883] updating cluster {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:45:06.183256   57240 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:45:06.183306   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:06.223405   57240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:45:06.223489   57240 ssh_runner.go:195] Run: which lz4
	I0816 13:45:06.227851   57240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:45:06.232132   57240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:45:06.232156   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:45:07.642616   57240 crio.go:462] duration metric: took 1.414789512s to copy over tarball
	I0816 13:45:07.642698   57240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:45:09.794329   57240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.151601472s)
	I0816 13:45:09.794359   57240 crio.go:469] duration metric: took 2.151717024s to extract the tarball
	I0816 13:45:09.794369   57240 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:45:09.833609   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:09.878781   57240 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:45:09.878806   57240 cache_images.go:84] Images are preloaded, skipping loading
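	Note on the preload step above: crictl initially reports no kube-apiserver image, so the preloaded image tarball is copied over and unpacked into /var with tar and lz4, after which the second crictl listing confirms all images are present. A simplified sketch of the unpack-and-clean-up part, assuming the tarball has already been placed at /preloaded.tar.lz4 (the scp over SSH is omitted):

    // Sketch: unpack the preload tarball into /var, preserving xattrs, then delete it.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func extractPreload(path string) error {
    	if _, err := os.Stat(path); err != nil {
    		return fmt.Errorf("preload tarball missing: %w", err)
    	}
    	// Same flags as the log: keep security.capability xattrs and decompress with lz4.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", path)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract: %v: %s", err, out)
    	}
    	return exec.Command("sudo", "rm", "-f", path).Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }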
	I0816 13:45:09.878815   57240 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0816 13:45:09.878944   57240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:45:09.879032   57240 ssh_runner.go:195] Run: crio config
	I0816 13:45:09.924876   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:09.924900   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:09.924927   57240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:45:09.924958   57240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-302520 NodeName:embed-certs-302520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:45:09.925150   57240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-302520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:45:09.925226   57240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:45:09.935204   57240 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:45:09.935280   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:45:09.945366   57240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 13:45:09.961881   57240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:45:09.978495   57240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 13:45:09.995664   57240 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0816 13:45:10.000132   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:10.013039   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
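	Note on the /etc/hosts line at 13:45:10.000132: the shell one-liner is an idempotent update, dropping any existing line for control-plane.minikube.internal before appending the fresh IP mapping and copying the file back into place. A Go equivalent of that pattern, as a sketch only (must run as root; the real flow stages a temp file and uses sudo cp over SSH):

    // Sketch: remove any stale hosts entry for a name, then append the new one.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func pinHost(ip, hostname string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Equivalent of grep -v $'\t<hostname>$' in the logged one-liner.
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHost("192.168.39.125", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }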
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.000553   58430 node_ready.go:49] node "default-k8s-diff-port-893736" has status "Ready":"True"
	I0816 13:45:06.000579   58430 node_ready.go:38] duration metric: took 7.004778161s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:45:06.000590   58430 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:06.006987   58430 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012552   58430 pod_ready.go:93] pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.012577   58430 pod_ready.go:82] duration metric: took 5.565882ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012588   58430 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519889   58430 pod_ready.go:93] pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.519919   58430 pod_ready.go:82] duration metric: took 507.322547ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519932   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:08.527411   58430 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:09.527923   58430 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.527950   58430 pod_ready.go:82] duration metric: took 3.008009418s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.527963   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534422   58430 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.534460   58430 pod_ready.go:82] duration metric: took 6.488169ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534476   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538660   58430 pod_ready.go:93] pod "kube-proxy-btq6r" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.538688   58430 pod_ready.go:82] duration metric: took 4.202597ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538700   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600350   58430 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.600377   58430 pod_ready.go:82] duration metric: took 61.666987ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600391   58430 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
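	Note on the pod_ready lines above (process 58430): after the node reports Ready, each system-critical pod is polled until its Ready condition is True, and metrics-server is the one that keeps reporting "Ready":"False" in this failing run. A minimal way to reproduce that check outside the test harness is to poll the pod's Ready condition via kubectl jsonpath; the sketch below does that (context, namespace and pod names are taken from the log, the helper itself is hypothetical, and kubectl wait --for=condition=Ready would do the same thing in one command):

    // Sketch: poll a pod's Ready condition with kubectl + jsonpath until it is True.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", kubeContext,
    			"-n", namespace, "get", "pod", pod, "-o", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
    }

    func main() {
    	err := waitPodReady("default-k8s-diff-port-893736", "kube-system",
    		"etcd-default-k8s-diff-port-893736", 6*time.Minute)
    	fmt.Println(err)
    }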
	I0816 13:45:09.361968   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:11.859112   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:10.143519   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:45:10.160358   57240 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520 for IP: 192.168.39.125
	I0816 13:45:10.160381   57240 certs.go:194] generating shared ca certs ...
	I0816 13:45:10.160400   57240 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:45:10.160591   57240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:45:10.160646   57240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:45:10.160656   57240 certs.go:256] generating profile certs ...
	I0816 13:45:10.160767   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/client.key
	I0816 13:45:10.160845   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key.f0c5f9ff
	I0816 13:45:10.160893   57240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key
	I0816 13:45:10.161075   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:45:10.161133   57240 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:45:10.161148   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:45:10.161182   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:45:10.161213   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:45:10.161243   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:45:10.161298   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:10.161944   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:45:10.202268   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:45:10.242684   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:45:10.287223   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:45:10.316762   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 13:45:10.343352   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:45:10.371042   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:45:10.394922   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:45:10.419358   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:45:10.442301   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:45:10.465266   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:45:10.487647   57240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:45:10.504713   57240 ssh_runner.go:195] Run: openssl version
	I0816 13:45:10.510493   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:45:10.521818   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526637   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526681   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.532660   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:45:10.543403   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:45:10.554344   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559089   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559149   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.564982   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:45:10.576074   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:45:10.586596   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591586   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591637   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.597624   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:45:10.608838   57240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:45:10.613785   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:45:10.619902   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:45:10.625554   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:45:10.631526   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:45:10.637251   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:45:10.643210   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
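	Note on the certificate checks just above: each "openssl x509 -noout -in <cert> -checkend 86400" call verifies the certificate will still be valid 24 hours from now, which is why the existing profile certs are reused rather than regenerated. A Go equivalent of that check using only the standard library (the path shown is one of the files checked in the log; the helper name is made up):

    // Sketch: parse a PEM certificate and confirm it remains valid for another duration d.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Same meaning as `openssl x509 -checkend 86400` when d is 24h.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }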
	I0816 13:45:10.649187   57240 kubeadm.go:392] StartCluster: {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:45:10.649298   57240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:45:10.649349   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.686074   57240 cri.go:89] found id: ""
	I0816 13:45:10.686153   57240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:45:10.696504   57240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:45:10.696527   57240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:45:10.696581   57240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:45:10.706447   57240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:45:10.707413   57240 kubeconfig.go:125] found "embed-certs-302520" server: "https://192.168.39.125:8443"
	I0816 13:45:10.710045   57240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:45:10.719563   57240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0816 13:45:10.719599   57240 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:45:10.719613   57240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:45:10.719665   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.759584   57240 cri.go:89] found id: ""
	I0816 13:45:10.759661   57240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:45:10.776355   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:45:10.786187   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:45:10.786205   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:45:10.786244   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:45:10.795644   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:45:10.795723   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:45:10.807988   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:45:10.817234   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:45:10.817299   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:45:10.826601   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.835702   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:45:10.835763   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.845160   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:45:10.855522   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:45:10.855578   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:45:10.865445   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:45:10.875429   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:10.988958   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.195215   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206217359s)
	I0816 13:45:12.195241   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.432322   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.514631   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
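	Note on the five Run lines above: during restartPrimaryControlPlane, minikube re-runs a fixed sequence of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, with the versioned binaries directory prepended to PATH. A sketch of that ordered sequence, using the exact commands from the log but a made-up wrapper:

    // Sketch: run the kubeadm init phases in the same order the log shows.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		script := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase)
    		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
    			return
    		}
    	}
    }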
	I0816 13:45:12.606133   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:45:12.606238   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.106823   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.606856   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.624866   57240 api_server.go:72] duration metric: took 1.018743147s to wait for apiserver process to appear ...
	I0816 13:45:13.624897   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:45:13.624930   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:13.625953   57240 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I0816 13:45:14.124979   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.607443   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.107647   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.357916   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.358986   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.404020   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.404049   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.404062   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.462649   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.462685   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.625998   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.632560   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:16.632586   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.124984   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.133533   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:17.133563   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.624993   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.629720   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:45:17.635848   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:45:17.635874   57240 api_server.go:131] duration metric: took 4.010970063s to wait for apiserver health ...
	I0816 13:45:17.635885   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:17.635892   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:17.637609   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:45:17.638828   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:45:17.650034   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
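	Note on the healthz sequence above (13:45:13.625 through 13:45:17.635): the probe first gets connection refused while the apiserver starts, then 403 (anonymous access is forbidden before RBAC bootstrap roles exist), then 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks are still pending, and finally 200 "ok". A minimal poller that mirrors that behaviour, as a sketch: it skips TLS verification because it is an anonymous liveness probe, exactly like the 403 responses in the log.

    // Sketch: GET /healthz repeatedly, treating refused connections, 403 and 500
    // as "not ready yet", until the endpoint returns 200 with body "ok".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.39.125:8443/healthz", 4*time.Minute))
    }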
	I0816 13:45:17.681352   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:45:17.691752   57240 system_pods.go:59] 8 kube-system pods found
	I0816 13:45:17.691784   57240 system_pods.go:61] "coredns-6f6b679f8f-phxht" [df7bd896-d1c6-4a0e-aead-e3db36e915da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:45:17.691792   57240 system_pods.go:61] "etcd-embed-certs-302520" [ef7bae1c-7cd3-4d8e-b2fc-e5837f4c5a1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:45:17.691801   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [957ba8ec-91ae-4cea-902f-81a286e35659] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:45:17.691806   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [afbfc2da-5435-4ebb-ada0-e0edc9d09a7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:45:17.691817   57240 system_pods.go:61] "kube-proxy-nnc6b" [ec8b820d-6f1d-4777-9f76-7efffb4e6e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:45:17.691824   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [077024c8-3dfd-4e8c-850a-333b63d3f23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:45:17.691832   57240 system_pods.go:61] "metrics-server-6867b74b74-9277d" [5d7ee9e5-b40c-4840-9fb4-0b7b8be9faca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:45:17.691837   57240 system_pods.go:61] "storage-provisioner" [6f3dc7f6-a3e0-4bc3-b362-e1d97633d0eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:45:17.691854   57240 system_pods.go:74] duration metric: took 10.481601ms to wait for pod list to return data ...
	I0816 13:45:17.691861   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:45:17.695253   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:45:17.695278   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:45:17.695292   57240 node_conditions.go:105] duration metric: took 3.4236ms to run NodePressure ...
	I0816 13:45:17.695311   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:17.996024   57240 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999887   57240 kubeadm.go:739] kubelet initialised
	I0816 13:45:17.999906   57240 kubeadm.go:740] duration metric: took 3.859222ms waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999913   57240 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:18.004476   57240 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.009142   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009162   57240 pod_ready.go:82] duration metric: took 4.665087ms for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.009170   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009175   57240 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.014083   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014102   57240 pod_ready.go:82] duration metric: took 4.91913ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.014118   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014124   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.018257   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018276   57240 pod_ready.go:82] duration metric: took 4.14471ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.018283   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018288   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.085229   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085257   57240 pod_ready.go:82] duration metric: took 66.95357ms for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.085269   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085276   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485094   57240 pod_ready.go:93] pod "kube-proxy-nnc6b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:18.485124   57240 pod_ready.go:82] duration metric: took 399.831747ms for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485135   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.606838   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.857109   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.858242   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.491635   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:22.492484   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:24.992054   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.107371   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.607589   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.357770   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.358102   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:26.992544   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.491552   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.106454   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:28.107564   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.115954   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:27.358671   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.857631   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:31.862487   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.491291   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:30.491320   57240 pod_ready.go:82] duration metric: took 12.006175772s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:30.491333   57240 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:32.497481   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.500397   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:32.606912   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.607600   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.358649   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:36.856727   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.003939   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.498178   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:37.106303   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.107343   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:38.857564   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.858908   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:41.998945   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:41.609813   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:44.116781   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.358670   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:45.857710   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.497146   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:48.498302   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.606815   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.107387   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:47.858274   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.861382   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.999007   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:53.497248   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:51.607047   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.106991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:52.363317   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.857924   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.497520   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.498597   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.997729   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.107187   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:58.606550   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.358875   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.857940   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.859677   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.998260   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.498473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:01.106533   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:03.107325   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.107629   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.359077   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.857557   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.997606   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:08.998915   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:07.606112   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.608137   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.357201   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.359055   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.497851   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.998849   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:12.106330   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:14.106732   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.857302   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:15.858233   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:16.497347   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.497948   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:16.107456   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.107650   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.608155   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.357472   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.857964   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.498563   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:22.998319   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:23.106190   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.106765   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:23.356443   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.356705   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.496930   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.497933   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.500639   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:46:27.606739   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.606870   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.357970   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.861012   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:31.998584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.497511   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.106718   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.606961   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:32.357555   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.857062   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.857679   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.997085   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.999741   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:37.113026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:39.606526   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.857809   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.358047   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.497724   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.498815   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.108651   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:44.606597   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.856419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.858360   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.998195   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.497584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:47.106288   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:49.108117   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.357517   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.857069   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.501014   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:52.998618   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.607335   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:54.106631   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:53.357847   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.857456   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.497897   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.999173   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:56.107168   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:58.606805   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.607098   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.857681   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.357433   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.497738   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.998036   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.998316   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:02.607546   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:05.106955   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.358368   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.359618   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.858437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.999109   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.498087   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:07.107809   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.606867   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.358384   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.359451   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.997273   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.998709   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.107421   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:14.606538   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.857389   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.857870   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:16.498127   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.498877   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:16.607841   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:19.107952   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.358184   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.857525   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.499116   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.996642   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.998888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.607247   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:23.608388   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.857995   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.858506   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.497595   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.997611   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:26.107830   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:28.607294   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:30.613619   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.356723   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.358026   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.857750   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:32.497685   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.500214   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:33.106665   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:35.107026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.358059   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.858347   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.998289   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.499047   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:37.107091   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.108555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.358316   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.858946   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.998041   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.498467   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:41.606734   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:43.608097   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.357080   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.357589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.503576   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.998084   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.105523   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.107122   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.606969   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.357769   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.859617   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.998256   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.498046   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:52.607529   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.107256   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.357315   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.357380   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.498193   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.498238   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.997582   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:57.606146   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.606603   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.358416   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.861332   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.998633   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.498646   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:01.607315   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.107759   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:02.358142   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.857766   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:06.998437   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.496888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:06.110473   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:08.606467   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:07.356589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.357419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.357839   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.497754   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.997641   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:11.108299   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.612816   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.856979   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.857269   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.999100   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.497473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:16.105992   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.606940   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.607532   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.357812   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.358351   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.498644   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.998103   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:24.999122   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:23.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.607110   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.858674   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.358411   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.497538   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.498345   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:27.607276   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:30.106660   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.856585   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.857836   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:31.498615   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:33.999039   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.107363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.607045   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.357801   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.856980   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.857321   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.497064   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.498666   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:48:36.608364   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:39.106585   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.857386   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.358217   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:40.997776   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.998819   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:44.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.106806   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:43.107007   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:45.606716   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.357715   57440 pod_ready.go:82] duration metric: took 4m0.006671881s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:48:42.357741   57440 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:48:42.357749   57440 pod_ready.go:39] duration metric: took 4m4.542302811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:48:42.357762   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:48:42.357787   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:42.357834   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:42.415231   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:42.415255   57440 cri.go:89] found id: ""
	I0816 13:48:42.415265   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:42.415324   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.421713   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:42.421779   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:42.462840   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:42.462867   57440 cri.go:89] found id: ""
	I0816 13:48:42.462878   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:42.462940   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.467260   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:42.467321   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:42.505423   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:42.505449   57440 cri.go:89] found id: ""
	I0816 13:48:42.505458   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:42.505517   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.510072   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:42.510124   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:42.551873   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:42.551894   57440 cri.go:89] found id: ""
	I0816 13:48:42.551902   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:42.551949   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.556735   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:42.556783   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:42.595853   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:42.595884   57440 cri.go:89] found id: ""
	I0816 13:48:42.595895   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:42.595948   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.600951   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:42.601003   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:42.639288   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.639311   57440 cri.go:89] found id: ""
	I0816 13:48:42.639320   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:42.639367   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.644502   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:42.644554   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:42.685041   57440 cri.go:89] found id: ""
	I0816 13:48:42.685065   57440 logs.go:276] 0 containers: []
	W0816 13:48:42.685074   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:42.685079   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:42.685137   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:42.722485   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.722506   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:42.722510   57440 cri.go:89] found id: ""
	I0816 13:48:42.722519   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:42.722590   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.727136   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.731169   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:42.731189   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.794303   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:42.794334   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.833686   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:42.833715   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:42.874606   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:42.874632   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:42.948074   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:42.948111   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:42.963546   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:42.963571   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:43.027410   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:43.027446   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:43.067643   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:43.067670   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:43.115156   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:43.115183   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:43.246588   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:43.246618   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:43.291042   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:43.291069   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:43.330741   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:43.330771   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:43.371970   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:43.371999   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:46.357313   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:46.373368   57440 api_server.go:72] duration metric: took 4m16.32601859s to wait for apiserver process to appear ...
	I0816 13:48:46.373396   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:48:46.373426   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:46.373473   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:46.411034   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:46.411059   57440 cri.go:89] found id: ""
	I0816 13:48:46.411067   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:46.411121   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.415948   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:46.416009   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:46.458648   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:46.458673   57440 cri.go:89] found id: ""
	I0816 13:48:46.458684   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:46.458735   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.463268   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:46.463332   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:46.502120   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:46.502139   57440 cri.go:89] found id: ""
	I0816 13:48:46.502149   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:46.502319   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.508632   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:46.508692   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:46.552732   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.552757   57440 cri.go:89] found id: ""
	I0816 13:48:46.552765   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:46.552812   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.557459   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:46.557524   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:46.598286   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:46.598308   57440 cri.go:89] found id: ""
	I0816 13:48:46.598330   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:46.598403   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.603050   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:46.603110   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:46.641616   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:46.641638   57440 cri.go:89] found id: ""
	I0816 13:48:46.641648   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:46.641712   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.646008   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:46.646076   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:46.682259   57440 cri.go:89] found id: ""
	I0816 13:48:46.682290   57440 logs.go:276] 0 containers: []
	W0816 13:48:46.682302   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:46.682310   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:46.682366   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:46.718955   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.718979   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:46.718985   57440 cri.go:89] found id: ""
	I0816 13:48:46.718993   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:46.719049   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.723519   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.727942   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:46.727968   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.771942   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:46.771971   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.818294   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:46.818319   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:46.887977   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:46.888021   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:46.903567   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:46.903599   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:47.010715   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:47.010747   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:47.056317   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:47.056346   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:47.114669   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:47.114696   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:47.498472   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.998541   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.606991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.607458   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.157046   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:47.157073   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:47.199364   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:47.199393   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:47.640964   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:47.641003   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:47.683503   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:47.683541   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:47.746748   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:47.746798   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.296176   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:48:50.300482   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:48:50.301550   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:48:50.301570   57440 api_server.go:131] duration metric: took 3.928168044s to wait for apiserver health ...
	I0816 13:48:50.301578   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:48:50.301599   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:50.301653   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:50.343199   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.343223   57440 cri.go:89] found id: ""
	I0816 13:48:50.343231   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:50.343276   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.347576   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:50.347651   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:50.387912   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.387937   57440 cri.go:89] found id: ""
	I0816 13:48:50.387947   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:50.388004   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.392120   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:50.392188   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:50.428655   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:50.428680   57440 cri.go:89] found id: ""
	I0816 13:48:50.428688   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:50.428734   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.432863   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:50.432941   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:50.472269   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:50.472295   57440 cri.go:89] found id: ""
	I0816 13:48:50.472304   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:50.472351   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.476961   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:50.477006   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:50.514772   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:50.514793   57440 cri.go:89] found id: ""
	I0816 13:48:50.514801   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:50.514857   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.520430   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:50.520492   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:50.564708   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:50.564733   57440 cri.go:89] found id: ""
	I0816 13:48:50.564741   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:50.564788   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.569255   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:50.569306   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:50.607803   57440 cri.go:89] found id: ""
	I0816 13:48:50.607823   57440 logs.go:276] 0 containers: []
	W0816 13:48:50.607829   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:50.607835   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:50.607888   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:50.643909   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.643934   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:50.643940   57440 cri.go:89] found id: ""
	I0816 13:48:50.643949   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:50.643994   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.648575   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.653322   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:50.653354   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:50.667847   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:50.667878   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:50.774932   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:50.774969   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.823473   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:50.823503   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.884009   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:50.884044   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.925187   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:50.925219   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.965019   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:50.965046   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:51.033614   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:51.033651   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:51.068360   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:51.068387   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:51.107768   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:51.107792   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:51.163637   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:51.163673   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:51.227436   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:51.227462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:51.265505   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:51.265531   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:54.130801   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:48:54.130828   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.130833   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.130837   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.130840   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.130843   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.130846   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.130852   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.130855   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.130862   57440 system_pods.go:74] duration metric: took 3.829279192s to wait for pod list to return data ...
	I0816 13:48:54.130868   57440 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:48:54.133253   57440 default_sa.go:45] found service account: "default"
	I0816 13:48:54.133282   57440 default_sa.go:55] duration metric: took 2.407297ms for default service account to be created ...
	I0816 13:48:54.133292   57440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:48:54.138812   57440 system_pods.go:86] 8 kube-system pods found
	I0816 13:48:54.138835   57440 system_pods.go:89] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.138841   57440 system_pods.go:89] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.138845   57440 system_pods.go:89] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.138849   57440 system_pods.go:89] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.138853   57440 system_pods.go:89] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.138856   57440 system_pods.go:89] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.138863   57440 system_pods.go:89] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.138868   57440 system_pods.go:89] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.138874   57440 system_pods.go:126] duration metric: took 5.576801ms to wait for k8s-apps to be running ...
	I0816 13:48:54.138879   57440 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:48:54.138922   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:54.154406   57440 system_svc.go:56] duration metric: took 15.507123ms WaitForService to wait for kubelet
	I0816 13:48:54.154438   57440 kubeadm.go:582] duration metric: took 4m24.107091364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:48:54.154463   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:48:54.156991   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:48:54.157012   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:48:54.157027   57440 node_conditions.go:105] duration metric: took 2.558338ms to run NodePressure ...
	I0816 13:48:54.157041   57440 start.go:241] waiting for startup goroutines ...
	I0816 13:48:54.157052   57440 start.go:246] waiting for cluster config update ...
	I0816 13:48:54.157070   57440 start.go:255] writing updated cluster config ...
	I0816 13:48:54.157381   57440 ssh_runner.go:195] Run: rm -f paused
	I0816 13:48:54.205583   57440 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:48:54.207845   57440 out.go:177] * Done! kubectl is now configured to use "no-preload-311070" cluster and "default" namespace by default
	I0816 13:48:51.999301   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.498057   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:52.107465   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.606735   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.498967   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.997311   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.606925   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.606970   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.607943   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.997760   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:02.998653   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:03.107555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.606363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.497723   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.498572   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.997905   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.607916   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.606579   58430 pod_ready.go:82] duration metric: took 4m0.00617652s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:09.606602   58430 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:49:09.606612   58430 pod_ready.go:39] duration metric: took 4m3.606005486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:09.606627   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:49:09.606652   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:09.606698   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:09.660442   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:09.660461   58430 cri.go:89] found id: ""
	I0816 13:49:09.660469   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:09.660519   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.664752   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:09.664813   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:09.701589   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:09.701615   58430 cri.go:89] found id: ""
	I0816 13:49:09.701625   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:09.701681   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.706048   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:09.706114   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:09.743810   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:09.743832   58430 cri.go:89] found id: ""
	I0816 13:49:09.743841   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:09.743898   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.748197   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:09.748271   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:09.783730   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:09.783752   58430 cri.go:89] found id: ""
	I0816 13:49:09.783765   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:09.783828   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.787845   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:09.787909   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:09.828449   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:09.828472   58430 cri.go:89] found id: ""
	I0816 13:49:09.828481   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:09.828546   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.832890   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:09.832963   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:09.880136   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:09.880164   58430 cri.go:89] found id: ""
	I0816 13:49:09.880175   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:09.880232   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.884533   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:09.884599   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:09.924776   58430 cri.go:89] found id: ""
	I0816 13:49:09.924805   58430 logs.go:276] 0 containers: []
	W0816 13:49:09.924816   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:09.924828   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:09.924889   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:09.971663   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:09.971689   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:09.971695   58430 cri.go:89] found id: ""
	I0816 13:49:09.971705   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:09.971770   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.976297   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.980815   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:09.980844   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:10.020287   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:10.020317   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:10.060266   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:10.060291   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:10.113574   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:10.113608   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:10.153457   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:10.153482   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:10.191530   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:10.191559   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:10.206267   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:10.206296   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:10.326723   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:10.326753   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:10.377541   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:10.377574   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:10.895387   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:10.895445   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:10.947447   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:10.947475   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:11.997943   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:13.998932   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:11.020745   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:11.020786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:11.081224   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:11.081257   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.632726   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:49:13.651185   58430 api_server.go:72] duration metric: took 4m14.880109274s to wait for apiserver process to appear ...
	I0816 13:49:13.651214   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:49:13.651254   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:13.651308   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:13.691473   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:13.691495   58430 cri.go:89] found id: ""
	I0816 13:49:13.691503   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:13.691582   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.695945   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:13.695998   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:13.730798   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:13.730830   58430 cri.go:89] found id: ""
	I0816 13:49:13.730840   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:13.730913   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.735156   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:13.735222   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:13.769612   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:13.769639   58430 cri.go:89] found id: ""
	I0816 13:49:13.769650   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:13.769710   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.773690   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:13.773745   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:13.815417   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:13.815444   58430 cri.go:89] found id: ""
	I0816 13:49:13.815454   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:13.815515   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.819596   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:13.819666   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:13.852562   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.852587   58430 cri.go:89] found id: ""
	I0816 13:49:13.852597   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:13.852657   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.856697   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:13.856757   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:13.902327   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:13.902346   58430 cri.go:89] found id: ""
	I0816 13:49:13.902353   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:13.902416   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.906789   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:13.906840   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:13.943401   58430 cri.go:89] found id: ""
	I0816 13:49:13.943430   58430 logs.go:276] 0 containers: []
	W0816 13:49:13.943438   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:13.943443   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:13.943490   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:13.979154   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:13.979178   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:13.979182   58430 cri.go:89] found id: ""
	I0816 13:49:13.979189   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:13.979235   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.983301   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.988522   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:13.988545   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:14.005891   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:14.005916   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:14.055686   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:14.055713   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:14.104975   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:14.105010   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:14.145761   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:14.145786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:14.198935   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:14.198966   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:14.662287   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:14.662323   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:14.717227   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:14.717256   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:14.789824   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:14.789868   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:14.902892   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:14.902922   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:14.946711   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:14.946736   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:14.986143   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:14.986175   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:15.022107   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:15.022138   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
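The block above is minikube's control-plane health sweep: each component container is located with crictl and its recent logs are tailed, alongside the kubelet and CRI-O journals. Roughly the same data can be pulled by hand on the node; a minimal sketch, where the container ID is a placeholder for one of the IDs listed above:

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400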
	I0816 13:49:16.497493   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:18.497979   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:17.556820   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:49:17.562249   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:49:17.563264   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:49:17.563280   58430 api_server.go:131] duration metric: took 3.912060569s to wait for apiserver health ...
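The healthz probe above can be reproduced from the node with plain curl; a sketch (-k skips certificate verification so no CA path needs to be assumed):

	curl -k https://192.168.50.186:8444/healthz
	# expected response body: ok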
	I0816 13:49:17.563288   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:49:17.563312   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:17.563377   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:17.604072   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:17.604099   58430 cri.go:89] found id: ""
	I0816 13:49:17.604109   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:17.604163   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.608623   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:17.608678   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:17.650241   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.650267   58430 cri.go:89] found id: ""
	I0816 13:49:17.650275   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:17.650328   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.654928   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:17.655000   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:17.690057   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:17.690085   58430 cri.go:89] found id: ""
	I0816 13:49:17.690095   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:17.690164   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.694636   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:17.694692   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:17.730134   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:17.730167   58430 cri.go:89] found id: ""
	I0816 13:49:17.730177   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:17.730238   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.734364   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:17.734420   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:17.769579   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:17.769595   58430 cri.go:89] found id: ""
	I0816 13:49:17.769603   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:17.769643   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.773543   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:17.773601   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:17.814287   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:17.814310   58430 cri.go:89] found id: ""
	I0816 13:49:17.814319   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:17.814393   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.818904   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:17.818977   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:17.858587   58430 cri.go:89] found id: ""
	I0816 13:49:17.858614   58430 logs.go:276] 0 containers: []
	W0816 13:49:17.858622   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:17.858627   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:17.858674   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:17.901759   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:17.901784   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:17.901788   58430 cri.go:89] found id: ""
	I0816 13:49:17.901796   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:17.901853   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.906139   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.910273   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:17.910293   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:17.924565   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:17.924590   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.971895   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:17.971927   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:18.011332   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:18.011364   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:18.049264   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:18.049292   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:18.084004   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:18.084030   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:18.136961   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:18.137000   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:18.210452   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:18.210483   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:18.327398   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:18.327429   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:18.378777   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:18.378809   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:18.430052   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:18.430088   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:18.496775   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:18.496806   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:18.540493   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:18.540523   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:21.451644   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:49:21.451673   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.451679   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.451682   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.451687   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.451691   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.451694   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.451701   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.451705   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.451713   58430 system_pods.go:74] duration metric: took 3.888418707s to wait for pod list to return data ...
	I0816 13:49:21.451719   58430 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:49:21.454558   58430 default_sa.go:45] found service account: "default"
	I0816 13:49:21.454578   58430 default_sa.go:55] duration metric: took 2.853068ms for default service account to be created ...
	I0816 13:49:21.454585   58430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:49:21.458906   58430 system_pods.go:86] 8 kube-system pods found
	I0816 13:49:21.458930   58430 system_pods.go:89] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.458935   58430 system_pods.go:89] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.458941   58430 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.458944   58430 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.458948   58430 system_pods.go:89] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.458951   58430 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.458958   58430 system_pods.go:89] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.458961   58430 system_pods.go:89] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.458968   58430 system_pods.go:126] duration metric: took 4.378971ms to wait for k8s-apps to be running ...
	I0816 13:49:21.458975   58430 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:49:21.459016   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:21.476060   58430 system_svc.go:56] duration metric: took 17.075817ms WaitForService to wait for kubelet
	I0816 13:49:21.476086   58430 kubeadm.go:582] duration metric: took 4m22.705015833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:49:21.476109   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:49:21.479557   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:49:21.479585   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:49:21.479600   58430 node_conditions.go:105] duration metric: took 3.483638ms to run NodePressure ...
	I0816 13:49:21.479613   58430 start.go:241] waiting for startup goroutines ...
	I0816 13:49:21.479622   58430 start.go:246] waiting for cluster config update ...
	I0816 13:49:21.479637   58430 start.go:255] writing updated cluster config ...
	I0816 13:49:21.479949   58430 ssh_runner.go:195] Run: rm -f paused
	I0816 13:49:21.530237   58430 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:49:21.532328   58430 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893736" cluster and "default" namespace by default
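At this point the default-k8s-diff-port-893736 profile is reported as up. A quick sketch of verifying that claim from the host, using the profile and context names taken from the log above:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-893736
	kubectl --context default-k8s-diff-port-893736 get nodes
	kubectl --context default-k8s-diff-port-893736 get pods -n kube-system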
	I0816 13:49:20.998486   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:23.498358   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:25.498502   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:27.998622   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:30.491886   57240 pod_ready.go:82] duration metric: took 4m0.000539211s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:30.491929   57240 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 13:49:30.491945   57240 pod_ready.go:39] duration metric: took 4m12.492024576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:30.491972   57240 kubeadm.go:597] duration metric: took 4m19.795438093s to restartPrimaryControlPlane
	W0816 13:49:30.492032   57240 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:49:30.492059   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:49:56.783263   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.29118348s)
	I0816 13:49:56.783321   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:56.798550   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:49:56.810542   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:49:56.820837   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:49:56.820873   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:49:56.820947   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:49:56.831998   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:49:56.832057   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:49:56.842351   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:49:56.852062   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:49:56.852119   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:49:56.862337   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.872000   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:49:56.872050   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.881764   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:49:56.891211   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:49:56.891276   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
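The four grep-then-rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Expressed as one loop, a sketch using the endpoint shown in the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done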
	I0816 13:49:56.900969   57240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:49:56.942823   57240 kubeadm.go:310] W0816 13:49:56.895203    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:56.943751   57240 kubeadm.go:310] W0816 13:49:56.896255    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:57.049491   57240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:05.244505   57240 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 13:50:05.244561   57240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:05.244657   57240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:05.244775   57240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:05.244901   57240 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:05.244989   57240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:05.246568   57240 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:05.246667   57240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:05.246779   57240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:05.246885   57240 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:05.246968   57240 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:05.247065   57240 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:05.247125   57240 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:05.247195   57240 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:05.247260   57240 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:05.247372   57240 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:05.247480   57240 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:05.247521   57240 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:05.247590   57240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:05.247670   57240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:05.247751   57240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 13:50:05.247830   57240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:05.247886   57240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:05.247965   57240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:05.248046   57240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:05.248100   57240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:05.249601   57240 out.go:235]   - Booting up control plane ...
	I0816 13:50:05.249698   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:05.249779   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:05.249835   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:05.249930   57240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:05.250007   57240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:05.250046   57240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:05.250184   57240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 13:50:05.250289   57240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 13:50:05.250343   57240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296228s
	I0816 13:50:05.250403   57240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 13:50:05.250456   57240 kubeadm.go:310] [api-check] The API server is healthy after 5.002119618s
	I0816 13:50:05.250546   57240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 13:50:05.250651   57240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 13:50:05.250700   57240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 13:50:05.250876   57240 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-302520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 13:50:05.250930   57240 kubeadm.go:310] [bootstrap-token] Using token: dta4cr.diyk2wto3tx3ixlb
	I0816 13:50:05.252120   57240 out.go:235]   - Configuring RBAC rules ...
	I0816 13:50:05.252207   57240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 13:50:05.252287   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 13:50:05.252418   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 13:50:05.252542   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 13:50:05.252648   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 13:50:05.252724   57240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 13:50:05.252819   57240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 13:50:05.252856   57240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 13:50:05.252895   57240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 13:50:05.252901   57240 kubeadm.go:310] 
	I0816 13:50:05.253004   57240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 13:50:05.253022   57240 kubeadm.go:310] 
	I0816 13:50:05.253116   57240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 13:50:05.253126   57240 kubeadm.go:310] 
	I0816 13:50:05.253155   57240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 13:50:05.253240   57240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 13:50:05.253283   57240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 13:50:05.253289   57240 kubeadm.go:310] 
	I0816 13:50:05.253340   57240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 13:50:05.253347   57240 kubeadm.go:310] 
	I0816 13:50:05.253405   57240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 13:50:05.253423   57240 kubeadm.go:310] 
	I0816 13:50:05.253484   57240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 13:50:05.253556   57240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 13:50:05.253621   57240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 13:50:05.253629   57240 kubeadm.go:310] 
	I0816 13:50:05.253710   57240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 13:50:05.253840   57240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 13:50:05.253855   57240 kubeadm.go:310] 
	I0816 13:50:05.253962   57240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254087   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 13:50:05.254122   57240 kubeadm.go:310] 	--control-plane 
	I0816 13:50:05.254126   57240 kubeadm.go:310] 
	I0816 13:50:05.254202   57240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 13:50:05.254209   57240 kubeadm.go:310] 
	I0816 13:50:05.254280   57240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254394   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 13:50:05.254407   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:50:05.254416   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:50:05.255889   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:50:05.257086   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:50:05.268668   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
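The log records only that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist, not its contents. For orientation, a generic bridge CNI configuration of this shape looks roughly as follows; this is a sketch only, and the subnet, plugin list, and field values are assumptions rather than the exact file minikube writes:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF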
	I0816 13:50:05.288676   57240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:50:05.288735   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.288755   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-302520 minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=embed-certs-302520 minikube.k8s.io/primary=true
	I0816 13:50:05.494987   57240 ops.go:34] apiserver oom_adj: -16
	I0816 13:50:05.495066   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.995792   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.495937   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.995513   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.495437   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.995600   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.495194   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.995101   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.495533   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.659383   57240 kubeadm.go:1113] duration metric: took 4.370714211s to wait for elevateKubeSystemPrivileges
	I0816 13:50:09.659425   57240 kubeadm.go:394] duration metric: took 4m59.010243945s to StartCluster
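The repeated "kubectl get sa default" calls above are a poll: minikube retries roughly every half second (as the timestamps show) until the default service account exists, which is what the elevateKubeSystemPrivileges step waits for before its duration is reported. A hand-rolled equivalent, a sketch using the kubeconfig and binary paths from the log:

	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done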
	I0816 13:50:09.659448   57240 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.659529   57240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:50:09.661178   57240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.661475   57240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:50:09.661579   57240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:50:09.661662   57240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-302520"
	I0816 13:50:09.661678   57240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-302520"
	I0816 13:50:09.661693   57240 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-302520"
	W0816 13:50:09.661701   57240 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:50:09.661683   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:50:09.661707   57240 addons.go:69] Setting metrics-server=true in profile "embed-certs-302520"
	I0816 13:50:09.661730   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.661732   57240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-302520"
	I0816 13:50:09.661744   57240 addons.go:234] Setting addon metrics-server=true in "embed-certs-302520"
	W0816 13:50:09.661758   57240 addons.go:243] addon metrics-server should already be in state true
	I0816 13:50:09.661789   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.662063   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662070   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662092   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662093   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662125   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662177   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.663568   57240 out.go:177] * Verifying Kubernetes components...
	I0816 13:50:09.665144   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:50:09.679643   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 13:50:09.679976   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0816 13:50:09.680138   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680460   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680652   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.680677   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681040   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.681060   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681084   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681449   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681659   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.681706   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.681737   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.682300   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0816 13:50:09.682644   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.683099   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.683121   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.683464   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.683993   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.684020   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.684695   57240 addons.go:234] Setting addon default-storageclass=true in "embed-certs-302520"
	W0816 13:50:09.684713   57240 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:50:09.684733   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.685016   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.685044   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.699612   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0816 13:50:09.700235   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.700244   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0816 13:50:09.700776   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.700795   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.700827   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.701285   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.701369   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0816 13:50:09.701457   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.701467   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.701939   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.701980   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.702188   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.702209   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.702494   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.702618   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.702635   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.703042   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.703250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.704568   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.705308   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.707074   57240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:50:09.707074   57240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:50:09.708773   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:50:09.708792   57240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:50:09.708813   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.708894   57240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:09.708924   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:50:09.708941   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.714305   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714338   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714812   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714874   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714928   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.715181   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715215   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715363   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715399   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715512   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715556   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715634   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.715876   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.724172   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0816 13:50:09.724636   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.725184   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.725213   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.725596   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.725799   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.727188   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.727410   57240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:09.727426   57240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:50:09.727447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.729840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730228   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.730255   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730534   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.730723   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.730867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.731014   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.899195   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:50:09.939173   57240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958087   57240 node_ready.go:49] node "embed-certs-302520" has status "Ready":"True"
	I0816 13:50:09.958119   57240 node_ready.go:38] duration metric: took 18.911367ms for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958130   57240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:09.963326   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:10.083721   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:10.184794   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:10.203192   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:50:10.203214   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:50:10.285922   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:50:10.285950   57240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:50:10.370797   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:10.370825   57240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:50:10.420892   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.420942   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421261   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421280   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.421282   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421293   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.421303   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421556   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421620   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421625   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.427229   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.427250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.427591   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.427638   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.427655   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.454486   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:11.225905   57240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041077031s)
	I0816 13:50:11.225958   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.225969   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226248   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226268   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.226273   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226295   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.226310   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226561   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226608   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226627   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447454   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447484   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.447823   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.447890   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.447908   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447924   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447936   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.448179   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.448195   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.448241   57240 addons.go:475] Verifying addon metrics-server=true in "embed-certs-302520"
	I0816 13:50:11.450274   57240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 13:50:11.451676   57240 addons.go:510] duration metric: took 1.790101568s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 13:50:11.971087   57240 pod_ready.go:103] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:12.470167   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.470193   57240 pod_ready.go:82] duration metric: took 2.506842546s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.470203   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474959   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.474980   57240 pod_ready.go:82] duration metric: took 4.769458ms for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474988   57240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479388   57240 pod_ready.go:93] pod "etcd-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.479410   57240 pod_ready.go:82] duration metric: took 4.41564ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479421   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483567   57240 pod_ready.go:93] pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.483589   57240 pod_ready.go:82] duration metric: took 4.159906ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483600   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:14.490212   57240 pod_ready.go:103] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:15.990204   57240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.990226   57240 pod_ready.go:82] duration metric: took 3.506618768s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.990235   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994580   57240 pod_ready.go:93] pod "kube-proxy-spgtw" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.994597   57240 pod_ready.go:82] duration metric: took 4.356588ms for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994605   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068472   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:16.068495   57240 pod_ready.go:82] duration metric: took 73.884906ms for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068503   57240 pod_ready.go:39] duration metric: took 6.110362477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:16.068519   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:50:16.068579   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:50:16.086318   57240 api_server.go:72] duration metric: took 6.424804798s to wait for apiserver process to appear ...
	I0816 13:50:16.086345   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:50:16.086361   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:50:16.091170   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:50:16.092122   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:50:16.092138   57240 api_server.go:131] duration metric: took 5.787898ms to wait for apiserver health ...
	I0816 13:50:16.092146   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:50:16.271303   57240 system_pods.go:59] 9 kube-system pods found
	I0816 13:50:16.271338   57240 system_pods.go:61] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.271344   57240 system_pods.go:61] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.271348   57240 system_pods.go:61] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.271353   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.271359   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.271364   57240 system_pods.go:61] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.271370   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.271379   57240 system_pods.go:61] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.271389   57240 system_pods.go:61] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.271398   57240 system_pods.go:74] duration metric: took 179.244421ms to wait for pod list to return data ...
	I0816 13:50:16.271410   57240 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:50:16.468167   57240 default_sa.go:45] found service account: "default"
	I0816 13:50:16.468196   57240 default_sa.go:55] duration metric: took 196.779435ms for default service account to be created ...
	I0816 13:50:16.468207   57240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:50:16.670917   57240 system_pods.go:86] 9 kube-system pods found
	I0816 13:50:16.670943   57240 system_pods.go:89] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.670949   57240 system_pods.go:89] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.670953   57240 system_pods.go:89] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.670957   57240 system_pods.go:89] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.670960   57240 system_pods.go:89] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.670963   57240 system_pods.go:89] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.670967   57240 system_pods.go:89] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.670972   57240 system_pods.go:89] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.670976   57240 system_pods.go:89] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.670984   57240 system_pods.go:126] duration metric: took 202.771216ms to wait for k8s-apps to be running ...
	I0816 13:50:16.670990   57240 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:50:16.671040   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:16.686873   57240 system_svc.go:56] duration metric: took 15.876641ms WaitForService to wait for kubelet
	I0816 13:50:16.686906   57240 kubeadm.go:582] duration metric: took 7.025397638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:50:16.686925   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:50:16.869367   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:50:16.869393   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:50:16.869405   57240 node_conditions.go:105] duration metric: took 182.475776ms to run NodePressure ...
	I0816 13:50:16.869420   57240 start.go:241] waiting for startup goroutines ...
	I0816 13:50:16.869427   57240 start.go:246] waiting for cluster config update ...
	I0816 13:50:16.869436   57240 start.go:255] writing updated cluster config ...
	I0816 13:50:16.869686   57240 ssh_runner.go:195] Run: rm -f paused
	I0816 13:50:16.919168   57240 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:50:16.921207   57240 out.go:177] * Done! kubectl is now configured to use "embed-certs-302520" cluster and "default" namespace by default
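For reference, the readiness sequence logged above for this profile (the pod Ready waits, the apiserver /healthz probe, and the kubelet unit check) can be re-run by hand. The following is a minimal shell sketch, not part of the test run, assuming the kubectl context embed-certs-302520 and the apiserver address 192.168.39.125:8443 recorded in this log; the healthz probe may need -k because the apiserver serves a cluster-local certificate.

    # Wait for the kube-system pods that pod_ready.go tracks to report Ready
    kubectl --context embed-certs-302520 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m

    # Probe the same healthz endpoint api_server.go polls; a healthy apiserver answers "ok"
    curl -sk https://192.168.39.125:8443/healthz

    # Confirm the kubelet unit is active, as system_svc.go does over SSH
    minikube -p embed-certs-302520 ssh -- sudo systemctl is-active kubelet
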
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 
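For a wait-control-plane timeout like the one above, the commands kubeadm suggests can be run directly on the affected node (for example over minikube ssh). A minimal sketch, assuming the CRI-O socket path /var/run/crio/crio.sock shown in the output and a systemd-managed kubelet; CONTAINERID stands in for whatever ID the container listing returns:

	# inspect the kubelet service and its recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet

	# list all Kubernetes containers known to CRI-O, then read the logs of a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID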
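The suggestion logged above points at a possible cgroup-driver mismatch between the kubelet and CRI-O. A sketch of retrying with the proposed flag, where PROFILE is a placeholder for the failing profile name and any other flags the test harness normally passes are omitted:

	minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd

The [WARNING Service-Kubelet] line in stderr can likewise be cleared on the node with 'systemctl enable kubelet.service', as the warning itself states.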
	
	
	==> CRI-O <==
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.249870247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39a7b9c4-8fd4-46d6-9807-c35066a9cdff name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.259703643Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0422cad3-dac1-482b-ba75-eb66c7f8d438 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.259929213Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9af952ee-3d22-4bd5-8138-87534a89702c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815875201761934,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:27.227353411Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8kbs6,Uid:e732183e-3b22-4a11-909a-246de5fc1c8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17238158751039860
79,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:27.227339254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b0e578f94c6573292814bea96ad1465da186583929a5d23da393739f77d35f3,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-mgxhv,Uid:e9654a8e-4db2-494d-93a7-a134b0e2bb50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815874309453243,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-mgxhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9654a8e-4db2-494d-93a7-a134b0e2bb50,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:27.2
27351866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f340d2e3-2889-4200-b477-830494b827c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815867541416390,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-16T13:44:27.227348690Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&PodSandboxMetadata{Name:kube-proxy-b8d5b,Uid:9ed1c33b-903f-43e8-880c-b9a49c658806,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815867540759513,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-880c-b9a49c658806,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-08-16T13:44:27.227356852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-311070,Uid:625eb3629609e577befcb415fe7a3e35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862726484414,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.116:8443,kubernetes.io/config.hash: 625eb3629609e577befcb415fe7a3e35,kubernetes.io/config.seen: 2024-08-16T13:44:22.216683963Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&PodSandboxMetadata{N
ame:kube-controller-manager-no-preload-311070,Uid:796943b75caef6e46cae3edcad9a83de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862725022832,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46cae3edcad9a83de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 796943b75caef6e46cae3edcad9a83de,kubernetes.io/config.seen: 2024-08-16T13:44:22.216685360Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-311070,Uid:346913957544dd3f3a427f9db15be919,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862722936805,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-311
070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.116:2379,kubernetes.io/config.hash: 346913957544dd3f3a427f9db15be919,kubernetes.io/config.seen: 2024-08-16T13:44:22.308216891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-311070,Uid:82514b8622d04376f3e5fe85f0cb7b09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862712955887,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82514b8622d04376f3e5fe85f0cb7b09,ku
bernetes.io/config.seen: 2024-08-16T13:44:22.216679687Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0422cad3-dac1-482b-ba75-eb66c7f8d438 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.260535874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad332540-5a6d-4d28-9696-437a3d696d69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.260603554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad332540-5a6d-4d28-9696-437a3d696d69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.260774503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad332540-5a6d-4d28-9696-437a3d696d69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.292700091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c57ddbc-9b30-4b5d-bb07-6084bdb6a0e6 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.292792464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c57ddbc-9b30-4b5d-bb07-6084bdb6a0e6 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.295578501Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e670761-b845-415d-9ae6-9b30f45e040b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.295832886Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9af952ee-3d22-4bd5-8138-87534a89702c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815875201761934,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:27.227353411Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8kbs6,Uid:e732183e-3b22-4a11-909a-246de5fc1c8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17238158751039860
79,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:27.227339254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b0e578f94c6573292814bea96ad1465da186583929a5d23da393739f77d35f3,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-mgxhv,Uid:e9654a8e-4db2-494d-93a7-a134b0e2bb50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815874309453243,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-mgxhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9654a8e-4db2-494d-93a7-a134b0e2bb50,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:27.2
27351866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f340d2e3-2889-4200-b477-830494b827c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815867541416390,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-16T13:44:27.227348690Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&PodSandboxMetadata{Name:kube-proxy-b8d5b,Uid:9ed1c33b-903f-43e8-880c-b9a49c658806,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815867540759513,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-880c-b9a49c658806,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-08-16T13:44:27.227356852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-311070,Uid:625eb3629609e577befcb415fe7a3e35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862726484414,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.116:8443,kubernetes.io/config.hash: 625eb3629609e577befcb415fe7a3e35,kubernetes.io/config.seen: 2024-08-16T13:44:22.216683963Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&PodSandboxMetadata{N
ame:kube-controller-manager-no-preload-311070,Uid:796943b75caef6e46cae3edcad9a83de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862725022832,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46cae3edcad9a83de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 796943b75caef6e46cae3edcad9a83de,kubernetes.io/config.seen: 2024-08-16T13:44:22.216685360Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-311070,Uid:346913957544dd3f3a427f9db15be919,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862722936805,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-311
070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.116:2379,kubernetes.io/config.hash: 346913957544dd3f3a427f9db15be919,kubernetes.io/config.seen: 2024-08-16T13:44:22.308216891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-311070,Uid:82514b8622d04376f3e5fe85f0cb7b09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815862712955887,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82514b8622d04376f3e5fe85f0cb7b09,ku
bernetes.io/config.seen: 2024-08-16T13:44:22.216679687Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6e670761-b845-415d-9ae6-9b30f45e040b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.296378974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f171f52a-e721-4121-b2ef-da39775b39b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.296471158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f171f52a-e721-4121-b2ef-da39775b39b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.296824514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7969
43b75caef6e46cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f
3e5fe85f0cb7b09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kuber
netes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f171f52a-e721-4121-b2ef-da39775b39b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.297966462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dcf9a6a-0a5b-40e9-b787-afedc639d818 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.298364142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816676298345870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dcf9a6a-0a5b-40e9-b787-afedc639d818 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.298847318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d7141e6-4ba2-4556-8bc4-1c92a0f39af4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.298909429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d7141e6-4ba2-4556-8bc4-1c92a0f39af4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.299183288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d7141e6-4ba2-4556-8bc4-1c92a0f39af4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.339307901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcdc895e-c49a-4aaf-a1ee-24f0824cdccf name=/runtime.v1.RuntimeService/Version
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.339417560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcdc895e-c49a-4aaf-a1ee-24f0824cdccf name=/runtime.v1.RuntimeService/Version
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.341692915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01fce029-8731-40b8-a23e-5efe46a24809 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.342118599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816676342033995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01fce029-8731-40b8-a23e-5efe46a24809 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.343108101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7d9aac4-6402-4cc4-acef-ea6ea3490c74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.343164055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7d9aac4-6402-4cc4-acef-ea6ea3490c74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:57:56 no-preload-311070 crio[719]: time="2024-08-16 13:57:56.343366617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7d9aac4-6402-4cc4-acef-ea6ea3490c74 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9150d56b0778       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   3125fb14de6f6       storage-provisioner
	0ea81a9459ce0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   158ed4beb224d       busybox
	1c89ddcb90aa2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   b72d7a25c2e01       coredns-6f6b679f8f-8kbs6
	35ef9517598da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   3125fb14de6f6       storage-provisioner
	ca2c017b0b7fc       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   4d46a3a717255       kube-proxy-b8d5b
	d8cda792253cd       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   a7ca36fe4257f       kube-controller-manager-no-preload-311070
	db946a5971167       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   8bb04f5bf9e67       kube-scheduler-no-preload-311070
	43c9169b2abc2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   e7e53bbe2e9c4       etcd-no-preload-311070
	17b3d9ea47cdf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   711e122055cf4       kube-apiserver-no-preload-311070
	
	
	==> coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34903 - 33828 "HINFO IN 8326533554559909018.250990010125686623. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01452461s
	
	
	==> describe nodes <==
	Name:               no-preload-311070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-311070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=no-preload-311070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_36_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:35:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-311070
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:57:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:55:11 +0000   Fri, 16 Aug 2024 13:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:55:11 +0000   Fri, 16 Aug 2024 13:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:55:11 +0000   Fri, 16 Aug 2024 13:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:55:11 +0000   Fri, 16 Aug 2024 13:44:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.116
	  Hostname:    no-preload-311070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8b176131bdb451e96436ef571244feb
	  System UUID:                b8b17613-1bdb-451e-9643-6ef571244feb
	  Boot ID:                    33340544-bf0f-4dc3-87b7-35d230a40dd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-8kbs6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-311070                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-311070             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-311070    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-b8d5b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-311070             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-mgxhv              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-311070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node no-preload-311070 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node no-preload-311070 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-311070 event: Registered Node no-preload-311070 in Controller
	  Normal  CIDRAssignmentFailed     21m                cidrAllocator    Node no-preload-311070 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-311070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-311070 event: Registered Node no-preload-311070 in Controller
	
	
	==> dmesg <==
	[Aug16 13:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040192] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.764452] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.400769] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.839831] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 13:44] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.054814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053301] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.156121] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.136468] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.277150] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[ +15.547270] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.067283] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981902] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[  +5.577177] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.514602] systemd-fstab-generator[2053]: Ignoring "noauto" option for root device
	[  +4.212777] kauditd_printk_skb: 58 callbacks suppressed
	[ +24.223736] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] <==
	{"level":"warn","ts":"2024-08-16T13:44:32.496034Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012700Z","time spent":"1.483325926s","remote":"127.0.0.1:34352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":393,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" "}
	{"level":"warn","ts":"2024-08-16T13:44:32.496297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.483599742s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"warn","ts":"2024-08-16T13:44:32.495982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.483190287s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-311070\" ","response":"range_response_count:1 size:4639"}
	{"level":"warn","ts":"2024-08-16T13:44:32.495951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.483145028s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-311070\" ","response":"range_response_count:1 size:4639"}
	{"level":"warn","ts":"2024-08-16T13:44:32.495796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.482905324s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-311070\" ","response":"range_response_count:1 size:4639"}
	{"level":"warn","ts":"2024-08-16T13:44:32.496011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.483229971s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-311070\" ","response":"range_response_count:1 size:4639"}
	{"level":"warn","ts":"2024-08-16T13:44:32.495441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.48267918s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"warn","ts":"2024-08-16T13:44:32.497025Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.488690754s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-311070\" ","response":"range_response_count:1 size:4639"}
	{"level":"info","ts":"2024-08-16T13:44:32.500865Z","caller":"traceutil/trace.go:171","msg":"trace[1315754133] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.492523185s","start":"2024-08-16T13:44:31.008327Z","end":"2024-08-16T13:44:32.500851Z","steps":["trace[1315754133] 'agreement among raft nodes before linearized reading'  (duration: 1.488656065s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.501728Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.008296Z","time spent":"1.493416788s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506229Z","caller":"traceutil/trace.go:171","msg":"trace[1697015200] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.493401299s","start":"2024-08-16T13:44:31.012800Z","end":"2024-08-16T13:44:32.506202Z","steps":["trace[1697015200] 'agreement among raft nodes before linearized reading'  (duration: 1.483107073s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012793Z","time spent":"1.493480975s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506376Z","caller":"traceutil/trace.go:171","msg":"trace[1757811805] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:552; }","duration":"1.493679829s","start":"2024-08-16T13:44:31.012689Z","end":"2024-08-16T13:44:32.506369Z","steps":["trace[1757811805] 'agreement among raft nodes before linearized reading'  (duration: 1.483575106s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506392Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012659Z","time spent":"1.493728608s","remote":"127.0.0.1:34472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":226,"request content":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506442Z","caller":"traceutil/trace.go:171","msg":"trace[782167257] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.493647636s","start":"2024-08-16T13:44:31.012790Z","end":"2024-08-16T13:44:32.506438Z","steps":["trace[782167257] 'agreement among raft nodes before linearized reading'  (duration: 1.483169543s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012782Z","time spent":"1.493690009s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506696Z","caller":"traceutil/trace.go:171","msg":"trace[1913843442] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.4939096s","start":"2024-08-16T13:44:31.012778Z","end":"2024-08-16T13:44:32.506688Z","steps":["trace[1913843442] 'agreement among raft nodes before linearized reading'  (duration: 1.483210808s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506751Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012770Z","time spent":"1.493971533s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506844Z","caller":"traceutil/trace.go:171","msg":"trace[130826627] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.494112613s","start":"2024-08-16T13:44:31.012725Z","end":"2024-08-16T13:44:32.506837Z","steps":["trace[130826627] 'agreement among raft nodes before linearized reading'  (duration: 1.482731812s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012715Z","time spent":"1.494166163s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.507050Z","caller":"traceutil/trace.go:171","msg":"trace[1429387092] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:552; }","duration":"1.494291576s","start":"2024-08-16T13:44:31.012752Z","end":"2024-08-16T13:44:32.507043Z","steps":["trace[1429387092] 'agreement among raft nodes before linearized reading'  (duration: 1.482648043s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.507151Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012744Z","time spent":"1.49439897s","remote":"127.0.0.1:34472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":237,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2024-08-16T13:54:25.111980Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":893}
	{"level":"info","ts":"2024-08-16T13:54:25.126114Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":893,"took":"13.649946ms","hash":4035405371,"current-db-size-bytes":2809856,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2809856,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-08-16T13:54:25.126175Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035405371,"revision":893,"compact-revision":-1}
	
	
	==> kernel <==
	 13:57:56 up 14 min,  0 users,  load average: 0.17, 0.12, 0.09
	Linux no-preload-311070 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] <==
	W0816 13:54:27.683966       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:54:27.684018       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:54:27.685019       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:54:27.685112       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 13:55:27.686203       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:55:27.686567       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 13:55:27.686488       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:55:27.686675       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 13:55:27.687858       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:55:27.687894       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 13:57:27.689051       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:57:27.689186       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 13:57:27.689145       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:57:27.689469       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:57:27.690355       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:57:27.691419       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] <==
	E0816 13:52:30.418215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:52:30.883661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:53:00.423408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:53:00.891557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:53:30.430289       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:53:30.898438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:54:00.438693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:54:00.908032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:54:30.443875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:54:30.914747       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:55:00.450405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:55:00.921995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:55:11.285544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-311070"
	E0816 13:55:30.457737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:55:30.929243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:55:40.312827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="216.861µs"
	I0816 13:55:54.312555       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="51.464µs"
	E0816 13:56:00.462792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:56:00.936314       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:56:30.469271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:56:30.945037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:57:00.477006       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:57:00.951937       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:57:30.485457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:57:30.960006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:44:28.244999       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:44:28.266220       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.116"]
	E0816 13:44:28.266360       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:44:28.326563       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:44:28.326685       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:44:28.326742       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:44:28.330369       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:44:28.330862       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:44:28.330927       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:28.336143       1 config.go:326] "Starting node config controller"
	I0816 13:44:28.336177       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:44:28.338982       1 config.go:197] "Starting service config controller"
	I0816 13:44:28.339017       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:44:28.339032       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:44:28.339038       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:44:28.339466       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:44:28.437231       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:44:28.439445       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] <==
	I0816 13:44:24.377492       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:44:26.647517       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:44:26.647606       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:44:26.647615       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:44:26.647622       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:44:26.738176       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:44:26.738234       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:26.748597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:44:26.748648       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:44:26.751505       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:44:26.751620       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:44:26.849981       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 13:56:50 no-preload-311070 kubelet[1427]: E0816 13:56:50.295273    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 13:56:52 no-preload-311070 kubelet[1427]: E0816 13:56:52.484839    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816612484466998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:56:52 no-preload-311070 kubelet[1427]: E0816 13:56:52.485250    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816612484466998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:02 no-preload-311070 kubelet[1427]: E0816 13:57:02.489876    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816622489499519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:02 no-preload-311070 kubelet[1427]: E0816 13:57:02.489939    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816622489499519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:03 no-preload-311070 kubelet[1427]: E0816 13:57:03.296616    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 13:57:12 no-preload-311070 kubelet[1427]: E0816 13:57:12.491388    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816632490914486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:12 no-preload-311070 kubelet[1427]: E0816 13:57:12.491449    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816632490914486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:17 no-preload-311070 kubelet[1427]: E0816 13:57:17.295156    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]: E0816 13:57:22.325112    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]: E0816 13:57:22.494003    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816642492816575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:22 no-preload-311070 kubelet[1427]: E0816 13:57:22.494120    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816642492816575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:29 no-preload-311070 kubelet[1427]: E0816 13:57:29.294939    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 13:57:32 no-preload-311070 kubelet[1427]: E0816 13:57:32.495381    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816652495042351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:32 no-preload-311070 kubelet[1427]: E0816 13:57:32.495421    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816652495042351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:41 no-preload-311070 kubelet[1427]: E0816 13:57:41.294684    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 13:57:42 no-preload-311070 kubelet[1427]: E0816 13:57:42.496641    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816662496418286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:42 no-preload-311070 kubelet[1427]: E0816 13:57:42.496687    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816662496418286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:52 no-preload-311070 kubelet[1427]: E0816 13:57:52.498487    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816672498244396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:52 no-preload-311070 kubelet[1427]: E0816 13:57:52.498528    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816672498244396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:53 no-preload-311070 kubelet[1427]: E0816 13:57:53.294336    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	
	
	==> storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] <==
	I0816 13:44:27.947014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 13:44:57.953583       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] <==
	I0816 13:44:58.620148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:44:58.632919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:44:58.633214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:45:16.039534       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:45:16.039878       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-311070_1ee62cc1-65e0-4a9e-97c6-ada22117f8b3!
	I0816 13:45:16.041887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2039a4dc-6f93-4a66-bad3-9b5760e7138c", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-311070_1ee62cc1-65e0-4a9e-97c6-ada22117f8b3 became leader
	I0816 13:45:16.140913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-311070_1ee62cc1-65e0-4a9e-97c6-ada22117f8b3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-311070 -n no-preload-311070
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-311070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mgxhv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-311070 describe pod metrics-server-6867b74b74-mgxhv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-311070 describe pod metrics-server-6867b74b74-mgxhv: exit status 1 (63.216755ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mgxhv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-311070 describe pod metrics-server-6867b74b74-mgxhv: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 13:58:22.060021028 +0000 UTC m=+5844.968711646
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-893736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-893736 logs -n 25: (2.074712129s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-779306 -- sudo                         | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-779306                                 | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759623                           | kubernetes-upgrade-759623    | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:35 UTC |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-302520            | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:42:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:42:15.998819   58430 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:42:15.998960   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.998970   58430 out.go:358] Setting ErrFile to fd 2...
	I0816 13:42:15.998976   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.999197   58430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:42:15.999747   58430 out.go:352] Setting JSON to false
	I0816 13:42:16.000715   58430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5081,"bootTime":1723810655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:42:16.000770   58430 start.go:139] virtualization: kvm guest
	I0816 13:42:16.003216   58430 out.go:177] * [default-k8s-diff-port-893736] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:42:16.004663   58430 notify.go:220] Checking for updates...
	I0816 13:42:16.004698   58430 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:42:16.006298   58430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:42:16.007719   58430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:42:16.009073   58430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:42:16.010602   58430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:42:16.012058   58430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:42:16.013799   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:42:16.014204   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.014278   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.029427   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0816 13:42:16.029977   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.030548   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.030573   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.030903   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.031164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.031412   58430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:42:16.031691   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.031731   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.046245   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0816 13:42:16.046668   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.047205   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.047244   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.047537   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.047730   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.080470   58430 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:42:16.081700   58430 start.go:297] selected driver: kvm2
	I0816 13:42:16.081721   58430 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.081825   58430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:42:16.082512   58430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.082593   58430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:42:16.097784   58430 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:42:16.098155   58430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:42:16.098223   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:42:16.098233   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:42:16.098274   58430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.098365   58430 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.100341   58430 out.go:177] * Starting "default-k8s-diff-port-893736" primary control-plane node in "default-k8s-diff-port-893736" cluster
	I0816 13:42:17.205125   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:16.101925   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:42:16.101966   58430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:42:16.101973   58430 cache.go:56] Caching tarball of preloaded images
	I0816 13:42:16.102052   58430 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:42:16.102063   58430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:42:16.102162   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:42:16.102344   58430 start.go:360] acquireMachinesLock for default-k8s-diff-port-893736: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:42:23.285172   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:26.357214   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:32.437218   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:35.509221   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:41.589174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:44.661162   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:50.741223   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:53.813193   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:59.893180   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:02.965205   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:09.045252   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:12.117232   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:18.197189   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:21.269234   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:27.349182   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:30.421174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:36.501197   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:39.573246   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:42.577406   57440 start.go:364] duration metric: took 4m10.318515071s to acquireMachinesLock for "no-preload-311070"
	I0816 13:43:42.577513   57440 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:43:42.577529   57440 fix.go:54] fixHost starting: 
	I0816 13:43:42.577955   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:43:42.577989   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:43:42.593032   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0816 13:43:42.593416   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:43:42.593860   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:43:42.593882   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:43:42.594256   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:43:42.594434   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:43:42.594586   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:43:42.596234   57440 fix.go:112] recreateIfNeeded on no-preload-311070: state=Stopped err=<nil>
	I0816 13:43:42.596261   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	W0816 13:43:42.596431   57440 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:43:42.598334   57440 out.go:177] * Restarting existing kvm2 VM for "no-preload-311070" ...
	I0816 13:43:42.574954   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:43:42.574990   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575324   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:43:42.575349   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575554   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:43:42.577250   57240 machine.go:96] duration metric: took 4m37.4289608s to provisionDockerMachine
	I0816 13:43:42.577309   57240 fix.go:56] duration metric: took 4m37.450613575s for fixHost
	I0816 13:43:42.577314   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 4m37.450631849s
	W0816 13:43:42.577330   57240 start.go:714] error starting host: provision: host is not running
	W0816 13:43:42.577401   57240 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 13:43:42.577410   57240 start.go:729] Will try again in 5 seconds ...
	I0816 13:43:42.599558   57440 main.go:141] libmachine: (no-preload-311070) Calling .Start
	I0816 13:43:42.599720   57440 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:43:42.600383   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:43:42.600682   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:43:42.601157   57440 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:43:42.601868   57440 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:43:43.816308   57440 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:43:43.817179   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:43.817566   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:43.817586   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:43.817516   58770 retry.go:31] will retry after 295.385031ms: waiting for machine to come up
	I0816 13:43:44.115046   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.115850   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.115875   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.115787   58770 retry.go:31] will retry after 340.249659ms: waiting for machine to come up
	I0816 13:43:44.457278   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.457722   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.457752   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.457657   58770 retry.go:31] will retry after 476.905089ms: waiting for machine to come up
	I0816 13:43:44.936230   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.936674   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.936714   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.936640   58770 retry.go:31] will retry after 555.288542ms: waiting for machine to come up
	I0816 13:43:45.493301   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.493698   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.493724   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.493657   58770 retry.go:31] will retry after 462.336365ms: waiting for machine to come up
	I0816 13:43:45.957163   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.957553   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.957580   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.957509   58770 retry.go:31] will retry after 886.665194ms: waiting for machine to come up
	I0816 13:43:46.845380   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:46.845743   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:46.845763   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:46.845723   58770 retry.go:31] will retry after 909.05227ms: waiting for machine to come up
	I0816 13:43:47.579134   57240 start.go:360] acquireMachinesLock for embed-certs-302520: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:43:47.755998   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:47.756439   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:47.756460   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:47.756407   58770 retry.go:31] will retry after 1.380778497s: waiting for machine to come up
	I0816 13:43:49.138398   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:49.138861   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:49.138884   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:49.138811   58770 retry.go:31] will retry after 1.788185586s: waiting for machine to come up
	I0816 13:43:50.929915   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:50.930326   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:50.930356   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:50.930276   58770 retry.go:31] will retry after 1.603049262s: waiting for machine to come up
	I0816 13:43:52.536034   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:52.536492   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:52.536518   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:52.536438   58770 retry.go:31] will retry after 1.964966349s: waiting for machine to come up
	I0816 13:43:54.504003   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:54.504408   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:54.504440   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:54.504363   58770 retry.go:31] will retry after 3.616796835s: waiting for machine to come up
	I0816 13:43:58.122295   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:58.122714   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:58.122747   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:58.122673   58770 retry.go:31] will retry after 3.893804146s: waiting for machine to come up
	I0816 13:44:02.020870   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021351   57440 main.go:141] libmachine: (no-preload-311070) Found IP for machine: 192.168.61.116
	I0816 13:44:02.021372   57440 main.go:141] libmachine: (no-preload-311070) Reserving static IP address...
	I0816 13:44:02.021385   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has current primary IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021917   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.021948   57440 main.go:141] libmachine: (no-preload-311070) Reserved static IP address: 192.168.61.116
	I0816 13:44:02.021966   57440 main.go:141] libmachine: (no-preload-311070) DBG | skip adding static IP to network mk-no-preload-311070 - found existing host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"}
	I0816 13:44:02.021977   57440 main.go:141] libmachine: (no-preload-311070) DBG | Getting to WaitForSSH function...
	I0816 13:44:02.021989   57440 main.go:141] libmachine: (no-preload-311070) Waiting for SSH to be available...
	I0816 13:44:02.024661   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025071   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.025094   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025327   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH client type: external
	I0816 13:44:02.025349   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa (-rw-------)
	I0816 13:44:02.025376   57440 main.go:141] libmachine: (no-preload-311070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:02.025387   57440 main.go:141] libmachine: (no-preload-311070) DBG | About to run SSH command:
	I0816 13:44:02.025406   57440 main.go:141] libmachine: (no-preload-311070) DBG | exit 0
	I0816 13:44:02.148864   57440 main.go:141] libmachine: (no-preload-311070) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:02.149279   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:44:02.149868   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.152149   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152460   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.152481   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152681   57440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:44:02.152853   57440 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:02.152869   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:02.153131   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.155341   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155703   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.155743   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155845   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:02.156024   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156199   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156350   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.156557   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.156747   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.156758   57440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:02.261263   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:02.261290   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261514   57440 buildroot.go:166] provisioning hostname "no-preload-311070"
	I0816 13:44:02.261528   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261696   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.264473   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.264892   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.264936   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.265030   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.265198   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265365   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265485   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.265624   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.265796   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.265813   57440 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname
	I0816 13:44:02.384079   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-311070
	
	I0816 13:44:02.384112   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.386756   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387065   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.387104   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387285   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.387501   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387699   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387843   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.387997   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.388193   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.388218   57440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-311070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-311070/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-311070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:02.502089   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:02.502122   57440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:02.502159   57440 buildroot.go:174] setting up certificates
	I0816 13:44:02.502173   57440 provision.go:84] configureAuth start
	I0816 13:44:02.502191   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.502484   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.505215   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505523   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.505560   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505726   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.507770   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508033   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.508062   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508193   57440 provision.go:143] copyHostCerts
	I0816 13:44:02.508249   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:02.508267   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:02.508336   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:02.508426   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:02.508435   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:02.508460   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:02.508520   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:02.508527   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:02.508548   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
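The copyHostCerts step above repeats one pattern per file (ca.pem, cert.pem, key.pem): if a stale copy already exists it is removed, then the cert is re-copied from the profile's certs directory. A minimal standalone sketch of that pattern in Go; the paths here are illustrative placeholders rather than minikube's actual layout logic:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// copyHostCert removes any stale copy at dst and rewrites it from src,
// mirroring the "found ..., removing ..." / "cp: ..." sequence in the log.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return fmt.Errorf("read %s: %w", src, err)
	}
	// 0600: the key material should not be world-readable.
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	// Hypothetical layout; the real paths come from the profile's MINIKUBE_HOME.
	base := filepath.Join(os.Getenv("HOME"), ".minikube")
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		src := filepath.Join(base, "certs", name)
		dst := filepath.Join(base, name)
		if err := copyHostCert(src, dst); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}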
	I0816 13:44:02.508597   57440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.no-preload-311070 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-311070]
	I0816 13:44:02.732379   57440 provision.go:177] copyRemoteCerts
	I0816 13:44:02.732434   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:02.732458   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.735444   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.735803   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.735837   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.736040   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.736274   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.736428   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.736587   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:02.819602   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:02.843489   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:44:02.866482   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:02.889908   57440 provision.go:87] duration metric: took 387.723287ms to configureAuth
	I0816 13:44:02.889936   57440 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:02.890151   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:02.890250   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.892851   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893158   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.893184   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893381   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.893607   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893777   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893925   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.894076   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.894267   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.894286   57440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:03.153730   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:03.153755   57440 machine.go:96] duration metric: took 1.000891309s to provisionDockerMachine
	I0816 13:44:03.153766   57440 start.go:293] postStartSetup for "no-preload-311070" (driver="kvm2")
	I0816 13:44:03.153776   57440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:03.153790   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.154088   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:03.154122   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.156612   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.156931   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.156969   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.157113   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.157299   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.157438   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.157595   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.241700   57440 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:03.246133   57440 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:03.246155   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:03.246221   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:03.246292   57440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:03.246379   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:03.257778   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:03.283511   57440 start.go:296] duration metric: took 129.718161ms for postStartSetup
	I0816 13:44:03.283552   57440 fix.go:56] duration metric: took 20.706029776s for fixHost
	I0816 13:44:03.283603   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.286296   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286608   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.286651   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286803   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.287016   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287158   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287298   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.287477   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:03.287639   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:03.287649   57440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:03.389691   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815843.358144829
	
	I0816 13:44:03.389710   57440 fix.go:216] guest clock: 1723815843.358144829
	I0816 13:44:03.389717   57440 fix.go:229] Guest: 2024-08-16 13:44:03.358144829 +0000 UTC Remote: 2024-08-16 13:44:03.283556408 +0000 UTC m=+271.159980604 (delta=74.588421ms)
	I0816 13:44:03.389749   57440 fix.go:200] guest clock delta is within tolerance: 74.588421ms
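The fix.go lines above read the guest's clock with `date +%s.%N`, compare it against the host-side timestamp, and accept the host as provisioned because the delta (74.588421ms) is within tolerance. A rough sketch of that comparison; the 2s tolerance constant is an assumption for illustration, not a value taken from the log:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// it drifts from the supplied host-side timestamp.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed tolerance for the sketch

	// Values taken from the log: guest 1723815843.358144829, remote 13:44:03.283556408.
	host := time.Unix(1723815843, 283556408)
	delta, err := guestClockDelta("1723815843.358144829\n", host)
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}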
	I0816 13:44:03.389754   57440 start.go:83] releasing machines lock for "no-preload-311070", held for 20.812259998s
	I0816 13:44:03.389779   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.390029   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:03.392788   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393137   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.393160   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393365   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.393870   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394042   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394125   57440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:03.394180   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.394215   57440 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:03.394235   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.396749   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.396813   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397124   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397152   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397180   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397197   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397466   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397543   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397717   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397731   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397874   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397921   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397998   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.398077   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.473552   57440 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:03.497958   57440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:03.644212   57440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:03.651347   57440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:03.651455   57440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:03.667822   57440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:03.667842   57440 start.go:495] detecting cgroup driver to use...
	I0816 13:44:03.667915   57440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:03.685838   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:03.700002   57440 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:03.700073   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:03.713465   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:03.726793   57440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:03.838274   57440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:03.967880   57440 docker.go:233] disabling docker service ...
	I0816 13:44:03.967951   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:03.982178   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:03.994574   57440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:04.132374   57440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:04.242820   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:04.257254   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:04.277961   57440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:04.278018   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.288557   57440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:04.288621   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.299108   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.310139   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.320850   57440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:04.332224   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.342905   57440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.361606   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
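The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed one-liners: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A minimal sketch of driving the same kind of edit from Go via exec; the file path and the two substitutions are taken from the log, while the helper itself is illustrative (the real runs go through ssh_runner on the guest):

package main

import (
	"fmt"
	"os/exec"
)

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// sedReplace applies one in-place substitution to the CRI-O drop-in config,
// the Go-side equivalent of the `sudo sed -i '...' 02-crio.conf` runs above.
func sedReplace(expr string) error {
	out, err := exec.Command("sudo", "sed", "-i", expr, crioConf).CombinedOutput()
	if err != nil {
		return fmt.Errorf("sed %q failed: %v: %s", expr, err, out)
	}
	return nil
}

func main() {
	// The two substitutions shown in the log: pause image and cgroup driver.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
	}
	for _, e := range edits {
		if err := sedReplace(e); err != nil {
			fmt.Println(err)
		}
	}
}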
	I0816 13:44:04.372423   57440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:04.382305   57440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:04.382355   57440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:04.396774   57440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:04.408417   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:04.516638   57440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:04.684247   57440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:04.684316   57440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:04.689824   57440 start.go:563] Will wait 60s for crictl version
	I0816 13:44:04.689878   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:04.693456   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:04.732628   57440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:04.732712   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.760111   57440 ssh_runner.go:195] Run: crio --version
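After restarting CRI-O, the log waits up to 60s for the socket path and then up to 60s for crictl to answer a version query. A small polling sketch of that wait, assuming local access to the socket and crictl (the real checks run over SSH via ssh_runner); the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check() until it succeeds or the deadline passes.
func waitFor(what string, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %v", what, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	const sock = "/var/run/crio/crio.sock"

	// Will wait 60s for the socket path, then 60s for crictl version, matching the log.
	if err := waitFor("crio socket", 60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := waitFor("crictl version", 60*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("cri-o is up")
}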
	I0816 13:44:04.790127   57440 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
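The old-k8s-version-882237 machine above has no DHCP lease yet, so retry.go keeps polling for an IP with growing delays ("will retry after 271.707338ms", "after 324.872897ms", and so on). A generic sketch of that retry loop; the jitter, cap, and attempt count are assumptions for illustration, not the values minikube uses:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries -- the "will retry after
// 271.707338ms: waiting for machine to come up" pattern in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second { // assumed cap for the sketch
			delay *= 2
		}
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	tries := 0
	_ = retryWithBackoff(10, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("machine came up after", tries, "tries")
}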
	I0816 13:44:04.791315   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:04.794224   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794587   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:04.794613   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794794   57440 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:04.798848   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:04.811522   57440 kubeadm.go:883] updating cluster {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:04.811628   57440 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:04.811685   57440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:04.845546   57440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:04.845567   57440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:04.845630   57440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.845654   57440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.845687   57440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:04.845714   57440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.845694   57440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.845789   57440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.845839   57440 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:44:04.845875   57440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.847440   57440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.847454   57440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.847484   57440 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:44:04.847429   57440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847431   57440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.847508   57440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.036225   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.071514   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.075186   57440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 13:44:05.075233   57440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.075273   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111591   57440 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 13:44:05.111634   57440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.111687   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111704   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.145127   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.145289   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.186194   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.200886   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.203824   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.208201   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.209021   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.234117   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.234893   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.245119   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 13:44:05.305971   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:44:05.306084   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.374880   57440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 13:44:05.374922   57440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.374971   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399114   57440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 13:44:05.399156   57440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.399187   57440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 13:44:05.399216   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399225   57440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.399267   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399318   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:44:05.399414   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:05.401940   57440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 13:44:05.401975   57440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.402006   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.513930   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 13:44:05.513961   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514017   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514032   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.514059   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.514112   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 13:44:05.514115   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.514150   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.634275   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.634340   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.864118   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:07.542707   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.028664898s)
	I0816 13:44:07.542745   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 13:44:07.542770   57440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542773   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.028589727s)
	I0816 13:44:07.542817   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.028737534s)
	I0816 13:44:07.542831   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542837   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:07.542869   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:07.542888   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.908584925s)
	I0816 13:44:07.542937   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:07.542951   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.908590671s)
	I0816 13:44:07.542995   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:07.543034   57440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.678883978s)
	I0816 13:44:07.543068   57440 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 13:44:07.543103   57440 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:07.543138   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:11.390456   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (3.847434032s)
	I0816 13:44:11.390507   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:44:11.390610   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.390647   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.847797916s)
	I0816 13:44:11.390674   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 13:44:11.390684   57440 ssh_runner.go:235] Completed: which crictl: (3.847535001s)
	I0816 13:44:11.390740   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.390780   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (3.847819859s)
	I0816 13:44:11.390810   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.847960553s)
	I0816 13:44:11.390825   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:44:11.390848   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:11.390908   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:11.390923   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (3.848033361s)
	I0816 13:44:11.390978   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:11.461833   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 13:44:11.461859   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461905   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461922   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:44:11.461843   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.461990   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 13:44:11.462013   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:11.462557   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:44:11.462649   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
	I0816 13:44:13.839070   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.377139357s)
	I0816 13:44:13.839101   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 13:44:13.839141   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839207   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839255   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377282192s)
	I0816 13:44:13.839312   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.377281378s)
	I0816 13:44:13.839358   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 13:44:13.839358   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.376690281s)
	I0816 13:44:13.839379   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 13:44:13.839318   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:13.880059   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 13:44:13.880203   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:15.818912   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.979684366s)
	I0816 13:44:15.818943   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 13:44:15.818975   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.938747663s)
	I0816 13:44:15.818986   57440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.819000   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 13:44:15.819043   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:17.795989   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976923514s)
	I0816 13:44:17.796013   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 13:44:17.796040   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:17.796088   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:19.147815   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351703003s)
	I0816 13:44:19.147843   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 13:44:19.147869   57440 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.147919   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.791370   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 13:44:19.791414   57440 cache_images.go:123] Successfully loaded all cached images
	I0816 13:44:19.791421   57440 cache_images.go:92] duration metric: took 14.945842963s to LoadCachedImages
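The cache_images flow that just finished follows the same decision for every image: `podman image inspect` to see whether the image already exists at the expected hash, `crictl rmi` to clear any stale copy, then `podman load -i` from the cached tarball under /var/lib/minikube/images. A condensed per-image sketch of that flow using exec; the commands and paths are taken from the log, while the helper structure is illustrative (the real code also checks file sizes and runs everything over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads one cached image tarball into the container runtime's
// storage if the image is not already present.
func ensureImage(image, tarball string) error {
	// "sudo podman image inspect --format {{.Id}} <image>": present means nothing to do.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil
	}
	// "needs transfer": remove any stale copy the runtime may still reference.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Load the cached tarball.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("Transferred and loaded %s from cache\n", tarball)
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-proxy:v1.31.0": "/var/lib/minikube/images/kube-proxy_v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0":      "/var/lib/minikube/images/etcd_3.5.15-0",
	}
	for img, tar := range images {
		if err := ensureImage(img, tar); err != nil {
			fmt.Println(err)
		}
	}
}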
	I0816 13:44:19.791440   57440 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0 crio true true} ...
	I0816 13:44:19.791593   57440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:19.791681   57440 ssh_runner.go:195] Run: crio config
	I0816 13:44:19.843963   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:19.843984   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:19.844003   57440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:19.844029   57440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-311070 NodeName:no-preload-311070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:19.844189   57440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-311070"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:19.844250   57440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:19.854942   57440 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:19.855014   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:19.864794   57440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 13:44:19.881210   57440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:19.897450   57440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0816 13:44:19.916038   57440 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:19.919995   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:19.934081   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:20.077422   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:20.093846   57440 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070 for IP: 192.168.61.116
	I0816 13:44:20.093864   57440 certs.go:194] generating shared ca certs ...
	I0816 13:44:20.093881   57440 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:20.094055   57440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:20.094120   57440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:20.094135   57440 certs.go:256] generating profile certs ...
	I0816 13:44:20.094236   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.key
	I0816 13:44:20.094325   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key.000c4f90
	I0816 13:44:20.094391   57440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key
	I0816 13:44:20.094529   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:20.094571   57440 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:20.094584   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:20.094621   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:20.094654   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:20.094795   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:20.094874   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:20.096132   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:20.130987   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:20.160701   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:20.187948   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:20.217162   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:44:20.242522   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:44:20.273582   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:20.300613   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:20.328363   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:20.353396   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:20.377770   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:20.401760   57440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:20.418302   57440 ssh_runner.go:195] Run: openssl version
	I0816 13:44:20.424065   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:20.434841   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439352   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439398   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.445210   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:20.455727   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:20.466095   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470528   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470568   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.476080   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:20.486189   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:20.496373   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500696   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500737   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.506426   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:20.517130   57440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:20.521664   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:20.527604   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:20.533478   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:20.539285   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:20.545042   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:20.550681   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:44:20.556239   57440 kubeadm.go:392] StartCluster: {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:20.556314   57440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:20.556391   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.594069   57440 cri.go:89] found id: ""
	I0816 13:44:20.594128   57440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:20.604067   57440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:20.604085   57440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:20.604131   57440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:20.614182   57440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:20.615072   57440 kubeconfig.go:125] found "no-preload-311070" server: "https://192.168.61.116:8443"
	I0816 13:44:20.617096   57440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:20.626046   57440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0816 13:44:20.626069   57440 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:20.626078   57440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:20.626114   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.659889   57440 cri.go:89] found id: ""
	I0816 13:44:20.659954   57440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:20.676977   57440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:20.686930   57440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:20.686946   57440 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:20.686985   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:20.696144   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:20.696222   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:20.705550   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:20.714350   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:20.714399   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:20.723636   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.732287   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:20.732329   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.741390   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:20.749913   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:20.749956   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:20.758968   57440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:20.768054   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:20.872847   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:21.933273   57440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060394194s)
	I0816 13:44:21.933303   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.130462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:23.689897   58430 start.go:364] duration metric: took 2m7.587518205s to acquireMachinesLock for "default-k8s-diff-port-893736"
	I0816 13:44:23.689958   58430 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:23.689965   58430 fix.go:54] fixHost starting: 
	I0816 13:44:23.690363   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:23.690401   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:23.707766   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0816 13:44:23.708281   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:23.709439   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:23.709462   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:23.709757   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:23.709906   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:23.710017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:23.711612   58430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-893736: state=Stopped err=<nil>
	I0816 13:44:23.711655   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	W0816 13:44:23.711797   58430 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:23.713600   58430 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-893736" ...
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:44:23.714877   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Start
	I0816 13:44:23.715069   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring networks are active...
	I0816 13:44:23.715788   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network default is active
	I0816 13:44:23.716164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network mk-default-k8s-diff-port-893736 is active
	I0816 13:44:23.716648   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Getting domain xml...
	I0816 13:44:23.717424   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Creating domain...
	I0816 13:44:24.979917   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting to get IP...
	I0816 13:44:24.980942   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981375   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981448   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:24.981363   59082 retry.go:31] will retry after 199.038336ms: waiting for machine to come up
	I0816 13:44:25.181886   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182350   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.182330   59082 retry.go:31] will retry after 297.566018ms: waiting for machine to come up
	I0816 13:44:25.481811   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482271   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482296   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.482234   59082 retry.go:31] will retry after 297.833233ms: waiting for machine to come up
	I0816 13:44:25.781831   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782445   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782479   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.782400   59082 retry.go:31] will retry after 459.810978ms: waiting for machine to come up
	I0816 13:44:22.220022   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.317717   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:22.317800   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:22.818025   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.318171   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.354996   57440 api_server.go:72] duration metric: took 1.037294965s to wait for apiserver process to appear ...
	I0816 13:44:23.355023   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:23.355043   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:23.355677   57440 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0816 13:44:23.855190   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.719152   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.719184   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.719204   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.756329   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.756366   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.855581   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.862856   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:26.862885   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.355555   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.365664   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.365702   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.855844   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.863185   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.863227   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:28.355490   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:28.361410   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:44:28.374558   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:28.374593   57440 api_server.go:131] duration metric: took 5.019562023s to wait for apiserver health ...
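The healthz exchange above follows the usual restart sequence: the endpoint first refuses connections, then returns 403 for the anonymous user, then 500 while poststart hooks (rbac/bootstrap-roles, bootstrap-controller, priority classes) finish, and finally 200. Below is a minimal sketch of that kind of readiness probe; the endpoint and the decision to skip TLS verification mirror the log, but the package and helper name are assumptions rather than the test harness's api_server.go code.

```go
package apiprobe

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200, treating connection errors, 403 and 500 as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving cert is not trusted on the host, so the probe
		// skips verification, as the log above does.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %v", timeout)
}
```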
	I0816 13:44:28.374604   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:28.374613   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:28.376749   57440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:28.378413   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:28.401199   57440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:28.420798   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:28.452605   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:28.452645   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:28.452655   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:28.452663   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:28.452671   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:28.452680   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:28.452689   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:28.452704   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:28.452710   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:44:28.452719   57440 system_pods.go:74] duration metric: took 31.89892ms to wait for pod list to return data ...
	I0816 13:44:28.452726   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:28.463229   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:28.463262   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:28.463275   57440 node_conditions.go:105] duration metric: took 10.544476ms to run NodePressure ...
	I0816 13:44:28.463296   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:28.809304   57440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819091   57440 kubeadm.go:739] kubelet initialised
	I0816 13:44:28.819115   57440 kubeadm.go:740] duration metric: took 9.779672ms waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819124   57440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:28.827828   57440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.840277   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840310   57440 pod_ready.go:82] duration metric: took 12.450089ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.840322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840333   57440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.847012   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847036   57440 pod_ready.go:82] duration metric: took 6.692927ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.847045   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847050   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.861358   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861404   57440 pod_ready.go:82] duration metric: took 14.346379ms for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.861417   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861428   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.870641   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870663   57440 pod_ready.go:82] duration metric: took 9.224713ms for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.870671   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870678   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.224281   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224310   57440 pod_ready.go:82] duration metric: took 353.622663ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.224322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224331   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.624518   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624552   57440 pod_ready.go:82] duration metric: took 400.212041ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.624567   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624577   57440 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:30.030291   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030327   57440 pod_ready.go:82] duration metric: took 405.73495ms for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:30.030341   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030352   57440 pod_ready.go:39] duration metric: took 1.211214389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:30.030372   57440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:30.045247   57440 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:30.045279   57440 kubeadm.go:597] duration metric: took 9.441179951s to restartPrimaryControlPlane
	I0816 13:44:30.045291   57440 kubeadm.go:394] duration metric: took 9.489057744s to StartCluster
	I0816 13:44:30.045312   57440 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.045410   57440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:30.047053   57440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.047310   57440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:30.047415   57440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:30.047486   57440 addons.go:69] Setting storage-provisioner=true in profile "no-preload-311070"
	I0816 13:44:30.047521   57440 addons.go:234] Setting addon storage-provisioner=true in "no-preload-311070"
	W0816 13:44:30.047534   57440 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:30.047569   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048048   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048079   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048302   57440 addons.go:69] Setting default-storageclass=true in profile "no-preload-311070"
	I0816 13:44:30.048339   57440 addons.go:69] Setting metrics-server=true in profile "no-preload-311070"
	I0816 13:44:30.048377   57440 addons.go:234] Setting addon metrics-server=true in "no-preload-311070"
	W0816 13:44:30.048387   57440 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:30.048424   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048343   57440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-311070"
	I0816 13:44:30.048812   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048834   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048933   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048957   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.049282   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:30.050905   57440 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:30.052478   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:30.069405   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0816 13:44:30.069463   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0816 13:44:30.069735   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0816 13:44:30.069949   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070072   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070145   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070488   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070506   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070586   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070598   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070618   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070627   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070977   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071006   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071031   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071212   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071639   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.071621   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.074680   57440 addons.go:234] Setting addon default-storageclass=true in "no-preload-311070"
	W0816 13:44:30.074699   57440 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:30.074730   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.075073   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.075100   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.088961   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0816 13:44:30.089421   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.089952   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.089971   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.090128   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0816 13:44:30.090429   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.090491   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.090744   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.090933   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.090950   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.091263   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.091463   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.093258   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.093571   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
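The old-k8s-version run above finds no /preloaded.tar.lz4 on the guest, copies the ~473 MB cached preload tarball over SSH, and unpacks it into /var with lz4. A minimal sketch of the extraction step (run on the guest) is shown below; the package and function name are assumptions, while the tar invocation is taken verbatim from the log line above.

```go
package preload

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed preload tarball into /var,
// preserving extended attributes, mirroring the command in the log above.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
```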
	I0816 13:44:30.094479   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0816 13:44:30.094965   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.095482   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.095502   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.095857   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.096322   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.096361   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.128555   57440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.128676   57440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:26.244353   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245158   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.245062   59082 retry.go:31] will retry after 680.176025ms: waiting for machine to come up
	I0816 13:44:26.926654   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927139   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.927106   59082 retry.go:31] will retry after 720.530442ms: waiting for machine to come up
	I0816 13:44:27.648858   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649342   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649367   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:27.649289   59082 retry.go:31] will retry after 930.752133ms: waiting for machine to come up
	I0816 13:44:28.581283   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581684   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581709   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:28.581635   59082 retry.go:31] will retry after 972.791503ms: waiting for machine to come up
	I0816 13:44:29.556168   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556583   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:29.556525   59082 retry.go:31] will retry after 1.18129541s: waiting for machine to come up
	I0816 13:44:30.739498   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739951   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739978   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:30.739883   59082 retry.go:31] will retry after 2.27951459s: waiting for machine to come up
	I0816 13:44:30.133959   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 13:44:30.134516   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.135080   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.135105   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.135463   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.135598   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.137494   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.137805   57440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.137824   57440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:30.137839   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.141006   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141509   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.141544   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141772   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.141952   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.142150   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.142305   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.164598   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:30.164627   57440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:30.164653   57440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.164654   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.164662   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:30.164687   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.168935   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169259   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169588   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169615   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169806   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.169828   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169859   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169953   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170096   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170103   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.170243   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170241   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.170389   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170505   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.285806   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:30.312267   57440 node_ready.go:35] waiting up to 6m0s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:30.406371   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.409491   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:30.409515   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:30.440485   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:30.440508   57440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:30.480735   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.484549   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:30.484573   57440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:30.541485   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:32.535406   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:33.204746   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.723973086s)
	I0816 13:44:33.204802   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204817   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.204843   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.798437569s)
	I0816 13:44:33.204877   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204889   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205092   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205116   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205126   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205134   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205357   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205359   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205379   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205387   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205395   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205408   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205445   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205454   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205593   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205605   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.214075   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.214095   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.214307   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.214320   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259136   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.717608276s)
	I0816 13:44:33.259188   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259212   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259468   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.259485   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259495   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259503   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259988   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.260004   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.260016   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.260026   57440 addons.go:475] Verifying addon metrics-server=true in "no-preload-311070"
	I0816 13:44:33.262190   57440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
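The warning above means the local image cache under .minikube/cache/images was empty, so the v1.20.0 control-plane images have to come from the registry instead. If pre-seeding that cache were wanted, the stock cache helper would look roughly like this (whether this CI job is meant to use it is an assumption):

	# hypothetical pre-seed for the images the log reports as missing
	minikube cache add registry.k8s.io/kube-apiserver:v1.20.0
	minikube cache add registry.k8s.io/etcd:3.4.13-0
	minikube cache add registry.k8s.io/coredns:1.7.0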
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
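To see where these kubelet flags land on the node itself, an ssh inspection along these lines works (profile name and drop-in path are the ones this log writes a few lines further down):

	minikube -p old-k8s-version-882237 ssh "sudo systemctl cat kubelet"
	minikube -p old-k8s-version-882237 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"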
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
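The kubeadm.yaml.new just copied over (later promoted to /var/tmp/minikube/kubeadm.yaml) can be sanity-checked in isolation before any of the phased init calls further down, roughly like this (--dry-run leaves the node untouched):

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run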
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
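The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, and -checkend 86400 is a 24-hour expiry guard; both are easy to reproduce by hand:

	# prints the hash used for the /etc/ssl/certs/<hash>.0 symlink, e.g. b5213941 for minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# exits 0 only if the cert stays valid for at least the next 86400 seconds
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "ok for >24h"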
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:33.022378   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023028   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:33.022950   59082 retry.go:31] will retry after 1.906001247s: waiting for machine to come up
	I0816 13:44:34.930169   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930674   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930702   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:34.930612   59082 retry.go:31] will retry after 2.809719622s: waiting for machine to come up
	I0816 13:44:33.263780   57440 addons.go:510] duration metric: took 3.216351591s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 13:44:34.816280   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:36.817474   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.742122   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742506   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742545   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:37.742464   59082 retry.go:31] will retry after 4.139761236s: waiting for machine to come up
	I0816 13:44:37.815407   57440 node_ready.go:49] node "no-preload-311070" has status "Ready":"True"
	I0816 13:44:37.815428   57440 node_ready.go:38] duration metric: took 7.503128864s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:37.815437   57440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:37.820318   57440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825460   57440 pod_ready.go:93] pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.825478   57440 pod_ready.go:82] duration metric: took 5.136508ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825486   57440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829609   57440 pod_ready.go:93] pod "etcd-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.829628   57440 pod_ready.go:82] duration metric: took 4.133294ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829635   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:39.835973   57440 pod_ready.go:103] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:40.335270   57440 pod_ready.go:93] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:40.335289   57440 pod_ready.go:82] duration metric: took 2.505647853s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:40.335298   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
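The pod_ready waits above are minikube's internal polling; a hypothetical client-side equivalent of the same check would be:

	# label taken from the system-critical label set listed earlier in this log
	kubectl --context no-preload-311070 -n kube-system wait pod -l component=kube-controller-manager --for=condition=Ready --timeout=6m0s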
	I0816 13:44:43.233555   57240 start.go:364] duration metric: took 55.654362151s to acquireMachinesLock for "embed-certs-302520"
	I0816 13:44:43.233638   57240 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:43.233649   57240 fix.go:54] fixHost starting: 
	I0816 13:44:43.234047   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:43.234078   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:43.253929   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0816 13:44:43.254405   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:43.254878   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:44:43.254900   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:43.255235   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:43.255400   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:44:43.255578   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:44:43.257434   57240 fix.go:112] recreateIfNeeded on embed-certs-302520: state=Stopped err=<nil>
	I0816 13:44:43.257472   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	W0816 13:44:43.257637   57240 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:43.259743   57240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-302520" ...
	I0816 13:44:41.885729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886143   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Found IP for machine: 192.168.50.186
	I0816 13:44:41.886162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserving static IP address...
	I0816 13:44:41.886178   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has current primary IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886540   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.886570   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | skip adding static IP to network mk-default-k8s-diff-port-893736 - found existing host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"}
	I0816 13:44:41.886585   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserved static IP address: 192.168.50.186
	I0816 13:44:41.886600   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for SSH to be available...
	I0816 13:44:41.886617   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Getting to WaitForSSH function...
	I0816 13:44:41.888671   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889003   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.889047   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889118   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH client type: external
	I0816 13:44:41.889142   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa (-rw-------)
	I0816 13:44:41.889181   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:41.889201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | About to run SSH command:
	I0816 13:44:41.889215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | exit 0
	I0816 13:44:42.017010   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:42.017374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetConfigRaw
	I0816 13:44:42.017979   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.020580   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.020958   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.020992   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.021174   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:44:42.021342   58430 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:42.021356   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.021521   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.023732   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024033   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.024057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.024354   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024667   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.024811   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.024994   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.025005   58430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:42.137459   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:42.137495   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137722   58430 buildroot.go:166] provisioning hostname "default-k8s-diff-port-893736"
	I0816 13:44:42.137745   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137925   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.140599   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.140987   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.141017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.141148   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.141309   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141430   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141536   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.141677   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.141843   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.141855   58430 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893736 && echo "default-k8s-diff-port-893736" | sudo tee /etc/hostname
	I0816 13:44:42.267643   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893736
	
	I0816 13:44:42.267670   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.270489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.270834   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.270867   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.271089   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.271266   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271405   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271527   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.271675   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.271829   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.271847   58430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:42.398010   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:42.398057   58430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:42.398122   58430 buildroot.go:174] setting up certificates
	I0816 13:44:42.398139   58430 provision.go:84] configureAuth start
	I0816 13:44:42.398157   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.398484   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.401217   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401566   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.401587   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.404082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404380   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.404425   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404541   58430 provision.go:143] copyHostCerts
	I0816 13:44:42.404596   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:42.404606   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:42.404666   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:42.404758   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:42.404767   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:42.404788   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:42.404850   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:42.404857   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:42.404873   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:42.404965   58430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893736 san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]
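The SAN list baked into the server cert generated above can be verified directly against what the log claims (the path is the one in the provisioning line):

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'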
	I0816 13:44:42.551867   58430 provision.go:177] copyRemoteCerts
	I0816 13:44:42.551928   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:42.551954   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.554945   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555276   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.555316   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.555699   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.555838   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.555964   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:42.643591   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:42.667108   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 13:44:42.690852   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:42.714001   58430 provision.go:87] duration metric: took 315.84846ms to configureAuth
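
	The provisioning step above ("generating server cert ... san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]") issues a host certificate whose SANs cover the loopback address, the guest IP, and the machine's hostnames before the certs are scp'd into /etc/docker. A minimal, self-contained sketch of that kind of issuance with Go's crypto/x509 (illustrative only; the CA loading, key size, and validity period are assumptions, not minikube's provision.go):

	package provisionsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate against a previously loaded
	// CA, using the same SANs that appear in the log line above.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-893736"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity, not minikube's value
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.186")},
			DNSNames:     []string{"default-k8s-diff-port-893736", "localhost", "minikube"},
		}
		certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return certDER, key, err
	}
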
	I0816 13:44:42.714030   58430 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:42.714189   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:42.714263   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.716726   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.717110   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717282   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.717486   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717621   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717740   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.717883   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.718038   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.718055   58430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:42.988769   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:42.988798   58430 machine.go:96] duration metric: took 967.444538ms to provisionDockerMachine
	I0816 13:44:42.988814   58430 start.go:293] postStartSetup for "default-k8s-diff-port-893736" (driver="kvm2")
	I0816 13:44:42.988833   58430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:42.988864   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.989226   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:42.989261   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.991868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.992184   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992364   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.992537   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.992689   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.992838   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.079199   58430 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:43.083277   58430 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:43.083296   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:43.083357   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:43.083459   58430 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:43.083576   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:43.092684   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:43.115693   58430 start.go:296] duration metric: took 126.86489ms for postStartSetup
	I0816 13:44:43.115735   58430 fix.go:56] duration metric: took 19.425768942s for fixHost
	I0816 13:44:43.115761   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.118597   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.118915   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.118947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.119100   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.119306   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119442   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.119683   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:43.119840   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:43.119850   58430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:43.233362   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815883.193133132
	
	I0816 13:44:43.233394   58430 fix.go:216] guest clock: 1723815883.193133132
	I0816 13:44:43.233406   58430 fix.go:229] Guest: 2024-08-16 13:44:43.193133132 +0000 UTC Remote: 2024-08-16 13:44:43.115740856 +0000 UTC m=+147.151935383 (delta=77.392276ms)
	I0816 13:44:43.233479   58430 fix.go:200] guest clock delta is within tolerance: 77.392276ms
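
	The guest-clock check above boils down to parsing the `date +%s.%N` output from the guest, comparing it with the host's clock, and only acting when the absolute delta exceeds a tolerance. A small sketch of that comparison using the timestamps from this run (the 2s tolerance is an assumed value for illustration, not taken from fix.go):

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether guest and host clocks differ by
	// no more than the given tolerance.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(1723815883, 193133132) // parsed from the SSH "date +%s.%N" output above
		host := time.Date(2024, 8, 16, 13, 44, 43, 115740856, time.UTC)
		fmt.Println(clockWithinTolerance(guest, host, 2*time.Second)) // true: ~77ms apart, as logged
	}
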
	I0816 13:44:43.233486   58430 start.go:83] releasing machines lock for "default-k8s-diff-port-893736", held for 19.543554553s
	I0816 13:44:43.233517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.233783   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:43.236492   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.236875   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.236901   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.237136   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237703   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237943   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.238074   58430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:43.238153   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.238182   58430 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:43.238215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.240639   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241029   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241193   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241360   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.241573   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241581   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.241601   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241733   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241732   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.241895   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.242052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.242178   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.352903   58430 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:43.359071   58430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:43.509233   58430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:43.516592   58430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:43.516666   58430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:43.534069   58430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:43.534096   58430 start.go:495] detecting cgroup driver to use...
	I0816 13:44:43.534167   58430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:43.553305   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:43.569958   58430 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:43.570007   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:43.590642   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:43.606411   58430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:43.733331   58430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:43.882032   58430 docker.go:233] disabling docker service ...
	I0816 13:44:43.882110   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:43.896780   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:43.909702   58430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:44.044071   58430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:44.170798   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:44.184421   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:44.203201   58430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:44.203269   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.213647   58430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:44.213708   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.224261   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.235295   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.247670   58430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:44.264065   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.276212   58430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.296049   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.307920   58430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:44.319689   58430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:44.319746   58430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:44.335735   58430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:44.352364   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:44.476754   58430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:44.618847   58430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:44.618914   58430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:44.623946   58430 start.go:563] Will wait 60s for crictl version
	I0816 13:44:44.624004   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:44:44.627796   58430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:44.666274   58430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:44.666350   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.694476   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.723937   58430 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:43.261237   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Start
	I0816 13:44:43.261399   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring networks are active...
	I0816 13:44:43.262183   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network default is active
	I0816 13:44:43.262591   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network mk-embed-certs-302520 is active
	I0816 13:44:43.263044   57240 main.go:141] libmachine: (embed-certs-302520) Getting domain xml...
	I0816 13:44:43.263849   57240 main.go:141] libmachine: (embed-certs-302520) Creating domain...
	I0816 13:44:44.565632   57240 main.go:141] libmachine: (embed-certs-302520) Waiting to get IP...
	I0816 13:44:44.566705   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.567120   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.567211   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.567113   59274 retry.go:31] will retry after 259.265867ms: waiting for machine to come up
	I0816 13:44:44.827603   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.828117   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.828152   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.828043   59274 retry.go:31] will retry after 271.270487ms: waiting for machine to come up
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.725112   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:44.728077   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728446   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:44.728469   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728728   58430 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:44.733365   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:44.746196   58430 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0816 13:44:44.746325   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:44.746385   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:44.787402   58430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:44.787481   58430 ssh_runner.go:195] Run: which lz4
	I0816 13:44:44.791755   58430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:44.797290   58430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:44.797320   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:44:42.342663   57440 pod_ready.go:93] pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.342685   57440 pod_ready.go:82] duration metric: took 2.007381193s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.342694   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346807   57440 pod_ready.go:93] pod "kube-proxy-b8d5b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.346824   57440 pod_ready.go:82] duration metric: took 4.124529ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346832   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351010   57440 pod_ready.go:93] pod "kube-scheduler-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.351025   57440 pod_ready.go:82] duration metric: took 4.186812ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351032   57440 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:44.358663   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:46.359708   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
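
	The pod_ready.go polling above ("waiting up to 6m0s for pod ... to be Ready") corresponds, roughly, to watching for the PodReady condition through client-go. A sketch under that assumption (the clientset construction, poll interval, and timeout are illustrative, not the harness's actual code):

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls until the named pod reports the Ready condition
	// as True, or the timeout elapses.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
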
	I0816 13:44:45.100554   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.101150   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.101265   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.101207   59274 retry.go:31] will retry after 309.469795ms: waiting for machine to come up
	I0816 13:44:45.412518   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.413009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.413040   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.412975   59274 retry.go:31] will retry after 502.564219ms: waiting for machine to come up
	I0816 13:44:45.917731   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.918284   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.918316   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.918235   59274 retry.go:31] will retry after 723.442166ms: waiting for machine to come up
	I0816 13:44:46.642971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:46.643467   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:46.643498   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:46.643400   59274 retry.go:31] will retry after 600.365383ms: waiting for machine to come up
	I0816 13:44:47.245233   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:47.245756   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:47.245785   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:47.245710   59274 retry.go:31] will retry after 1.06438693s: waiting for machine to come up
	I0816 13:44:48.312043   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:48.312842   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:48.312886   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:48.312840   59274 retry.go:31] will retry after 903.877948ms: waiting for machine to come up
	I0816 13:44:49.218419   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:49.218805   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:49.218835   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:49.218758   59274 retry.go:31] will retry after 1.73892963s: waiting for machine to come up
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.230345   58430 crio.go:462] duration metric: took 1.438624377s to copy over tarball
	I0816 13:44:46.230429   58430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:48.358060   58430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127589486s)
	I0816 13:44:48.358131   58430 crio.go:469] duration metric: took 2.127754698s to extract the tarball
	I0816 13:44:48.358145   58430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:48.398054   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:48.449391   58430 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:44:48.449416   58430 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:44:48.449425   58430 kubeadm.go:934] updating node { 192.168.50.186 8444 v1.31.0 crio true true} ...
	I0816 13:44:48.449576   58430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-893736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:48.449662   58430 ssh_runner.go:195] Run: crio config
	I0816 13:44:48.499389   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:48.499413   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:48.499424   58430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:48.499452   58430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893736 NodeName:default-k8s-diff-port-893736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:48.499576   58430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-893736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:48.499653   58430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:48.509639   58430 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:48.509706   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:48.519099   58430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 13:44:48.535866   58430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:48.552977   58430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 13:44:48.571198   58430 ssh_runner.go:195] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:48.575881   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:48.587850   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:48.703848   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:48.721449   58430 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736 for IP: 192.168.50.186
	I0816 13:44:48.721476   58430 certs.go:194] generating shared ca certs ...
	I0816 13:44:48.721496   58430 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:48.721677   58430 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:48.721731   58430 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:48.721745   58430 certs.go:256] generating profile certs ...
	I0816 13:44:48.721843   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/client.key
	I0816 13:44:48.721926   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key.64c9b41b
	I0816 13:44:48.721980   58430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key
	I0816 13:44:48.722107   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:48.722138   58430 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:48.722149   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:48.722182   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:48.722204   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:48.722225   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:48.722258   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:48.722818   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:48.779462   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:48.814653   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:48.887435   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:48.913644   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 13:44:48.937536   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:48.960729   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:48.984375   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:44:49.007997   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:49.031631   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:49.054333   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:49.076566   58430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:49.092986   58430 ssh_runner.go:195] Run: openssl version
	I0816 13:44:49.098555   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:49.109454   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114868   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114934   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.120811   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:49.131829   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:49.142825   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147276   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147322   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.152678   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:49.163622   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:49.174426   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179353   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179406   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.185129   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:49.196668   58430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:49.201447   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:49.207718   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:49.213869   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:49.220325   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:49.226220   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:49.231971   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
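
	Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The equivalent check with Go's standard library (a sketch; the helper name and error wording are made up):

	package certcheck

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// validFor returns an error if the PEM certificate at path expires
	// within the next d (e.g. 86400s, matching -checkend 86400).
	func validFor(path string, d time.Duration) error {
		raw, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(d).After(cert.NotAfter) {
			return fmt.Errorf("certificate expires at %s, within %s", cert.NotAfter, d)
		}
		return nil
	}
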
	I0816 13:44:49.238080   58430 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:49.238178   58430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:49.238231   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.276621   58430 cri.go:89] found id: ""
	I0816 13:44:49.276719   58430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:49.287765   58430 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:49.287785   58430 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:49.287829   58430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:49.298038   58430 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:49.299171   58430 kubeconfig.go:125] found "default-k8s-diff-port-893736" server: "https://192.168.50.186:8444"
	I0816 13:44:49.301521   58430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:49.311800   58430 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.186
	I0816 13:44:49.311833   58430 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:49.311845   58430 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:49.311899   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.363716   58430 cri.go:89] found id: ""
	I0816 13:44:49.363784   58430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:49.381053   58430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:49.391306   58430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:49.391322   58430 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:49.391370   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 13:44:49.400770   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:49.400829   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:49.410252   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 13:44:49.419405   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:49.419481   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:49.429330   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.438521   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:49.438587   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.448144   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 13:44:49.456744   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:49.456805   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:49.466062   58430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:49.476159   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:49.597639   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.673182   58430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075495766s)
	I0816 13:44:50.673218   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.887802   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.958384   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
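Editor's aside (not part of the captured log): the five "kubeadm init phase" invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane piece by piece after the stale config files were removed. A rough sketch of what the last two phases leave behind, assuming kubeadm's default manifest location, which is not printed in this log:

	# Illustrative only -- the control-plane and etcd phases write static pod
	# manifests that the kubelet then starts; /etc/kubernetes/manifests is the
	# kubeadm default location (assumed here).
	sudo ls /etc/kubernetes/manifests/
	# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml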
	I0816 13:44:48.858145   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:51.358083   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:50.959807   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:50.960217   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:50.960236   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:50.960188   59274 retry.go:31] will retry after 2.32558417s: waiting for machine to come up
	I0816 13:44:53.287947   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:53.288441   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:53.288470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:53.288388   59274 retry.go:31] will retry after 1.85414625s: waiting for machine to come up
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
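Editor's aside (not part of the captured log): the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are minikube polling for the apiserver process to reappear. For readers unfamiliar with the flags, these are standard procps pgrep options:

	# Illustrative only. -f matches the pattern against the full command line,
	# -x requires the whole command line to match the pattern exactly, and
	# -n reports only the newest matching process.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'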
	I0816 13:44:51.054015   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:51.054101   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.554846   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.055178   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.082087   58430 api_server.go:72] duration metric: took 1.028080423s to wait for apiserver process to appear ...
	I0816 13:44:52.082114   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:52.082133   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:52.082624   58430 api_server.go:269] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": dial tcp 192.168.50.186:8444: connect: connection refused
	I0816 13:44:52.582261   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.336530   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.336565   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.336580   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.374699   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.374733   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.583112   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.588756   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:55.588782   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.082182   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.088062   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.088108   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.582273   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.587049   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.587087   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:57.082664   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:57.092562   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:44:57.100740   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:57.100767   58430 api_server.go:131] duration metric: took 5.018647278s to wait for apiserver health ...
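Editor's aside (not part of the captured log): the healthz polling above progresses from connection refused, to 403 for the anonymous user, to 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally to 200. The same endpoint can be probed by hand; this is only a sketch, and the certificate paths are an assumption about minikube's layout rather than anything printed in the log:

	# Illustrative only. Host and port come from the log above;
	# /var/lib/minikube/certs is an assumed certificate directory.
	curl --cacert /var/lib/minikube/certs/ca.crt \
	     --cert   /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	     --key    /var/lib/minikube/certs/apiserver-kubelet-client.key \
	     "https://192.168.50.186:8444/healthz?verbose"
	# Without client credentials the request is treated as system:anonymous,
	# which is why the earlier checks returned 403 before RBAC bootstrap finished.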
	I0816 13:44:57.100777   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:57.100784   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:57.102775   58430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:53.358390   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:55.358437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:57.104079   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:57.115212   58430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:57.137445   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:57.150376   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:57.150412   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:57.150422   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:57.150435   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:57.150448   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:57.150454   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:57.150458   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:57.150463   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:57.150472   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:44:57.150481   58430 system_pods.go:74] duration metric: took 13.019757ms to wait for pod list to return data ...
	I0816 13:44:57.150489   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:57.153699   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:57.153721   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:57.153731   58430 node_conditions.go:105] duration metric: took 3.237407ms to run NodePressure ...
	I0816 13:44:57.153752   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:57.439130   58430 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446848   58430 kubeadm.go:739] kubelet initialised
	I0816 13:44:57.446876   58430 kubeadm.go:740] duration metric: took 7.718113ms waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446885   58430 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:57.452263   58430 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.459002   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459024   58430 pod_ready.go:82] duration metric: took 6.735487ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.459033   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459039   58430 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.463723   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463742   58430 pod_ready.go:82] duration metric: took 4.695932ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.463751   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463756   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.468593   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468619   58430 pod_ready.go:82] duration metric: took 4.856498ms for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.468632   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468643   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.541251   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541278   58430 pod_ready.go:82] duration metric: took 72.626413ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.541290   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541296   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.940580   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940616   58430 pod_ready.go:82] duration metric: took 399.312571ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.940627   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940635   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.340647   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340671   58430 pod_ready.go:82] duration metric: took 400.026004ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.340683   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340694   58430 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.750549   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750573   58430 pod_ready.go:82] duration metric: took 409.872187ms for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.750588   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750598   58430 pod_ready.go:39] duration metric: took 1.303702313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
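Editor's aside (not part of the captured log): the wait loop above checks a fixed set of label selectors on system-critical pods. The same pods can be listed directly with standard set-based selectors; the context name below is taken from the profile name in the log and is an assumption about how the kubeconfig entry is named:

	# Illustrative only.
	kubectl --context default-k8s-diff-port-893736 -n kube-system get pods \
	  -l 'k8s-app in (kube-dns, kube-proxy)'
	kubectl --context default-k8s-diff-port-893736 -n kube-system get pods \
	  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'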
	I0816 13:44:58.750626   58430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:58.766462   58430 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:58.766482   58430 kubeadm.go:597] duration metric: took 9.478690644s to restartPrimaryControlPlane
	I0816 13:44:58.766491   58430 kubeadm.go:394] duration metric: took 9.528416258s to StartCluster
	I0816 13:44:58.766509   58430 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.766572   58430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:58.770737   58430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.771036   58430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:58.771138   58430 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:58.771218   58430 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771232   58430 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771245   58430 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771281   58430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893736"
	I0816 13:44:58.771252   58430 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771337   58430 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:58.771371   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771285   58430 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771447   58430 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:58.771485   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771231   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:58.771653   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771682   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771750   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771781   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771839   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771886   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.772665   58430 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:58.773992   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:58.788717   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0816 13:44:58.789233   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.789833   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.789859   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.790269   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.790882   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.790913   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.791553   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0816 13:44:58.791556   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0816 13:44:58.791945   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.791979   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.792413   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792440   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.792813   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.792963   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792986   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.793013   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.793374   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.793940   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.793986   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.796723   58430 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.796740   58430 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:58.796763   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.797138   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.797184   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.806753   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0816 13:44:58.807162   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.807605   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.807624   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.807984   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.808229   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.809833   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.811642   58430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:58.812716   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0816 13:44:58.812888   58430 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:58.812902   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:58.812937   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.813184   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.813668   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.813695   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.813725   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0816 13:44:58.814101   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.814207   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.814696   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.814715   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.814948   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.814961   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.815304   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.815518   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.816936   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817482   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.817529   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.817543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817871   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.818057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.818219   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.818397   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.819251   58430 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:55.143862   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:55.144403   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:55.144433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:55.144354   59274 retry.go:31] will retry after 3.573850343s: waiting for machine to come up
	I0816 13:44:58.720104   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:58.720571   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:58.720606   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:58.720510   59274 retry.go:31] will retry after 4.52867767s: waiting for machine to come up
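Editor's aside (not part of the captured log): the retry loop above is libmachine waiting for the embed-certs-302520 domain to obtain an IP address from the mk-embed-certs-302520 libvirt network. The same information can be read straight from libvirt on the host; these are standard virsh commands, shown purely for orientation:

	# Illustrative only.
	virsh net-dhcp-leases mk-embed-certs-302520   # leases handed out on this network
	virsh domifaddr embed-certs-302520            # addresses as seen for the domain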
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.820720   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:58.820733   58430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:58.820747   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.823868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824290   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.824305   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.824629   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.824764   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.824860   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.830530   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0816 13:44:58.830894   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.831274   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.831294   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.831583   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.831729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.833321   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.833512   58430 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:58.833526   58430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:58.833543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.836244   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836626   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.836649   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836762   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.836947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.837098   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.837234   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.973561   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:58.995763   58430 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:44:59.118558   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:59.126100   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:59.126125   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:59.154048   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:59.162623   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:59.162649   58430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:59.213614   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.213635   58430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:59.233653   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.485000   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485030   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485329   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.485384   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485397   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485406   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485414   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485736   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485777   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485741   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.491692   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.491711   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.491938   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.491957   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.273964   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.04027784s)
	I0816 13:45:00.274018   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274036   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274032   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119945545s)
	I0816 13:45:00.274065   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274080   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274373   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274388   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274398   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274406   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274441   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274481   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274499   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274513   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274620   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274633   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274643   58430 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-893736"
	I0816 13:45:00.274749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274842   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274851   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.276747   58430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 13:45:00.278150   58430 addons.go:510] duration metric: took 1.506994649s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
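Editor's aside (not part of the captured log): the addon block above ends with default-storageclass, metrics-server and storage-provisioner enabled for the default-k8s-diff-port-893736 profile. The equivalent, expressed as explicit minikube CLI commands rather than the in-process path the test takes:

	# Illustrative only.
	minikube -p default-k8s-diff-port-893736 addons enable metrics-server
	minikube -p default-k8s-diff-port-893736 addons enable storage-provisioner
	minikube -p default-k8s-diff-port-893736 addons list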
	I0816 13:44:57.858846   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:00.357028   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:03.253913   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254379   57240 main.go:141] libmachine: (embed-certs-302520) Found IP for machine: 192.168.39.125
	I0816 13:45:03.254401   57240 main.go:141] libmachine: (embed-certs-302520) Reserving static IP address...
	I0816 13:45:03.254418   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has current primary IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254776   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.254804   57240 main.go:141] libmachine: (embed-certs-302520) Reserved static IP address: 192.168.39.125
	I0816 13:45:03.254822   57240 main.go:141] libmachine: (embed-certs-302520) DBG | skip adding static IP to network mk-embed-certs-302520 - found existing host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"}
	I0816 13:45:03.254840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Getting to WaitForSSH function...
	I0816 13:45:03.254848   57240 main.go:141] libmachine: (embed-certs-302520) Waiting for SSH to be available...
	I0816 13:45:03.256951   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257302   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.257327   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH client type: external
	I0816 13:45:03.257483   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa (-rw-------)
	I0816 13:45:03.257519   57240 main.go:141] libmachine: (embed-certs-302520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:45:03.257528   57240 main.go:141] libmachine: (embed-certs-302520) DBG | About to run SSH command:
	I0816 13:45:03.257537   57240 main.go:141] libmachine: (embed-certs-302520) DBG | exit 0
	I0816 13:45:03.389262   57240 main.go:141] libmachine: (embed-certs-302520) DBG | SSH cmd err, output: <nil>: 
	I0816 13:45:03.389630   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetConfigRaw
	I0816 13:45:03.390305   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.392462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.392767   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.392795   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.393012   57240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:45:03.393212   57240 machine.go:93] provisionDockerMachine start ...
	I0816 13:45:03.393230   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:03.393453   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.395589   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.395949   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.395971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.396086   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.396258   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396589   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.396785   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.397004   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.397042   57240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:45:03.513624   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:45:03.513655   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.513954   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:45:03.513976   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.514199   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.517138   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517499   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.517520   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517672   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.517867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518007   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518168   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.518364   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.518583   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.518599   57240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-302520 && echo "embed-certs-302520" | sudo tee /etc/hostname
	I0816 13:45:03.647799   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-302520
	
	I0816 13:45:03.647840   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.650491   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.650846   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.650880   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.651103   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.651301   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651469   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651614   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.651778   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.651935   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.651951   57240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-302520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-302520/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-302520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:45:03.778350   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
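The two SSH commands above set the guest's hostname and pin it to 127.0.1.1 in /etc/hosts so the name resolves locally. A minimal Go sketch of the same step, run locally with os/exec instead of over SSH (the hostname is simply the one from this log, and the shell fragment is a simplified version of what the provisioner sends):

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostname mirrors the provisioning step in the log: set the hostname,
// write /etc/hostname, then rewrite (or append) the 127.0.1.1 entry in
// /etc/hosts so the name resolves locally. Requires root/sudo.
func ensureHostname(name string) error {
	script := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname >/dev/null
if ! grep -q '\s%[1]s$' /etc/hosts; then
  if grep -q '^127.0.1.1\s' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts >/dev/null
  fi
fi`, name)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("hostname provisioning failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureHostname("embed-certs-302520"); err != nil {
		fmt.Println(err)
	}
}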
	I0816 13:45:03.778381   57240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:45:03.778411   57240 buildroot.go:174] setting up certificates
	I0816 13:45:03.778423   57240 provision.go:84] configureAuth start
	I0816 13:45:03.778435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.778689   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.781319   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781673   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.781695   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781829   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.783724   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784035   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.784064   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784180   57240 provision.go:143] copyHostCerts
	I0816 13:45:03.784243   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:45:03.784262   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:45:03.784335   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:45:03.784462   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:45:03.784474   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:45:03.784503   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:45:03.784568   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:45:03.784578   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:45:03.784600   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:45:03.784647   57240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.embed-certs-302520 san=[127.0.0.1 192.168.39.125 embed-certs-302520 localhost minikube]
	I0816 13:45:03.901261   57240 provision.go:177] copyRemoteCerts
	I0816 13:45:03.901314   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:45:03.901339   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.904505   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.904893   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.904933   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.905118   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.905329   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.905499   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.905650   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:03.996083   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:45:04.024594   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:45:04.054080   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:45:04.079810   57240 provision.go:87] duration metric: took 301.374056ms to configureAuth
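configureAuth copies the host certs and generates a server certificate whose SANs cover 127.0.0.1, the guest IP, the machine name, localhost and minikube. A small stdlib-only Go sketch for inspecting which SANs actually ended up in such a certificate (the file path is illustrative, point it at the generated server.pem):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; use the server.pem produced during provisioning.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("Expires: ", cert.NotAfter)
}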
	I0816 13:45:04.079865   57240 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:45:04.080048   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:45:04.080116   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.082649   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083037   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.083090   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083239   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.083430   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083775   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.083951   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.084149   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.084171   57240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:45:04.404121   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:45:04.404150   57240 machine.go:96] duration metric: took 1.010924979s to provisionDockerMachine
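The last step of provisionDockerMachine above writes /etc/sysconfig/crio.minikube with an --insecure-registry entry for the service CIDR (10.96.0.0/12) and restarts cri-o. A hedged Go sketch of the same idea, writing the drop-in only when its content changed (path and content taken from the log; run as root; not minikube's actual code):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// writeSysconfig drops the extra cri-o flags into /etc/sysconfig/crio.minikube
// and restarts cri-o only when the content actually changed.
func writeSysconfig() error {
	const path = "/etc/sysconfig/crio.minikube"
	content := []byte("CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n")

	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // already up to date, no restart needed
	}
	if err := os.WriteFile(path, content, 0o644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := writeSysconfig(); err != nil {
		panic(err)
	}
}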
	I0816 13:45:04.404163   57240 start.go:293] postStartSetup for "embed-certs-302520" (driver="kvm2")
	I0816 13:45:04.404182   57240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:45:04.404202   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.404542   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:45:04.404574   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.407763   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408118   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.408145   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408311   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.408508   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.408685   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.408864   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.496519   57240 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:45:04.501262   57240 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:45:04.501282   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:45:04.501352   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:45:04.501440   57240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:45:04.501554   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:45:04.511338   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:04.535372   57240 start.go:296] duration metric: took 131.188411ms for postStartSetup
	I0816 13:45:04.535411   57240 fix.go:56] duration metric: took 21.301761751s for fixHost
	I0816 13:45:04.535435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.538286   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538651   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.538676   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538868   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.539069   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539208   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539344   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.539504   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.539702   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.539715   57240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:45:04.653529   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815904.606422212
	
	I0816 13:45:04.653556   57240 fix.go:216] guest clock: 1723815904.606422212
	I0816 13:45:04.653566   57240 fix.go:229] Guest: 2024-08-16 13:45:04.606422212 +0000 UTC Remote: 2024-08-16 13:45:04.535416156 +0000 UTC m=+359.547804920 (delta=71.006056ms)
	I0816 13:45:04.653598   57240 fix.go:200] guest clock delta is within tolerance: 71.006056ms
	I0816 13:45:04.653605   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 21.419990329s
	I0816 13:45:04.653631   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.653922   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:04.656682   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.657034   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657211   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657800   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657981   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.658069   57240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:45:04.658114   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.658172   57240 ssh_runner.go:195] Run: cat /version.json
	I0816 13:45:04.658193   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.660629   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.660942   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661051   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661076   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661315   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661474   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.661598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661646   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.661841   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.661904   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.662054   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.662199   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.767691   57240 ssh_runner.go:195] Run: systemctl --version
	I0816 13:45:04.773984   57240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:45:04.925431   57240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:45:04.931848   57240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:45:04.931931   57240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:45:04.951355   57240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:45:04.951381   57240 start.go:495] detecting cgroup driver to use...
	I0816 13:45:04.951442   57240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:45:04.972903   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:45:04.987531   57240 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:45:04.987600   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:45:05.001880   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:45:05.018403   57240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.999833   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:03.500652   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:05.143662   57240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:45:05.297447   57240 docker.go:233] disabling docker service ...
	I0816 13:45:05.297527   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:45:05.313382   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:45:05.327116   57240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:45:05.486443   57240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:45:05.620465   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:45:05.634813   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:45:05.653822   57240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:45:05.653887   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.664976   57240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:45:05.665045   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.676414   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.688631   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.700400   57240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:45:05.712822   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.724573   57240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.742934   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.755669   57240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:45:05.766837   57240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:45:05.766890   57240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:45:05.782296   57240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:45:05.793695   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.919559   57240 ssh_runner.go:195] Run: sudo systemctl restart crio
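The sed commands above pin the pause image to registry.k8s.io/pause:3.10, switch cri-o to the cgroupfs cgroup manager, move conmon into the pod cgroup and allow unprivileged low ports before the daemon-reload and restart. A rough Go equivalent of the first two rewrites (the config path is from the log; everything else is a sketch, not minikube's actual implementation):

package main

import (
	"os"
	"regexp"
)

// rewriteConf applies the same kind of in-place edits that the sed commands
// above perform on /etc/crio/crio.conf.d/02-crio.conf.
func rewriteConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewriteConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}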
	I0816 13:45:06.057480   57240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:45:06.057543   57240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:45:06.062348   57240 start.go:563] Will wait 60s for crictl version
	I0816 13:45:06.062414   57240 ssh_runner.go:195] Run: which crictl
	I0816 13:45:06.066456   57240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:45:06.104075   57240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:45:06.104156   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.132406   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.161878   57240 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:45:02.357119   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:04.361365   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.859546   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.163233   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:06.165924   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166310   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:06.166333   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166529   57240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:45:06.170722   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:06.183152   57240 kubeadm.go:883] updating cluster {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0816 13:45:06.183256   57240 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:45:06.183306   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:06.223405   57240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:45:06.223489   57240 ssh_runner.go:195] Run: which lz4
	I0816 13:45:06.227851   57240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:45:06.232132   57240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:45:06.232156   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:45:07.642616   57240 crio.go:462] duration metric: took 1.414789512s to copy over tarball
	I0816 13:45:07.642698   57240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:45:09.794329   57240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.151601472s)
	I0816 13:45:09.794359   57240 crio.go:469] duration metric: took 2.151717024s to extract the tarball
	I0816 13:45:09.794369   57240 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:45:09.833609   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:09.878781   57240 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:45:09.878806   57240 cache_images.go:84] Images are preloaded, skipping loading
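Whether the preload tarball has to be copied at all is decided by listing the images cri-o already has (the "assuming images are not preloaded" decision at crio.go:510 above, versus "all images are preloaded" at crio.go:514 after extraction). A sketch of that check in Go, shelling out to crictl the same way the log does; the JSON field names are an assumption based on current crictl output, not something shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictl images --output json is assumed to return
// {"images":[{"repoTags":[...], ...}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any image known to cri-o carries the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}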
	I0816 13:45:09.878815   57240 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0816 13:45:09.878944   57240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:45:09.879032   57240 ssh_runner.go:195] Run: crio config
	I0816 13:45:09.924876   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:09.924900   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:09.924927   57240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:45:09.924958   57240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-302520 NodeName:embed-certs-302520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:45:09.925150   57240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-302520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:45:09.925226   57240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:45:09.935204   57240 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:45:09.935280   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:45:09.945366   57240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 13:45:09.961881   57240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:45:09.978495   57240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
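The kubeadm config copied to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks those documents and prints each apiVersion/kind pair, using gopkg.in/yaml.v3 (an assumed dependency for the example, not something this log shows minikube using):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. the file rendered above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}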
	I0816 13:45:09.995664   57240 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0816 13:45:10.000132   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:10.013039   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.000553   58430 node_ready.go:49] node "default-k8s-diff-port-893736" has status "Ready":"True"
	I0816 13:45:06.000579   58430 node_ready.go:38] duration metric: took 7.004778161s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:45:06.000590   58430 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:06.006987   58430 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012552   58430 pod_ready.go:93] pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.012577   58430 pod_ready.go:82] duration metric: took 5.565882ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012588   58430 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519889   58430 pod_ready.go:93] pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.519919   58430 pod_ready.go:82] duration metric: took 507.322547ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519932   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:08.527411   58430 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:09.527923   58430 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.527950   58430 pod_ready.go:82] duration metric: took 3.008009418s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.527963   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534422   58430 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.534460   58430 pod_ready.go:82] duration metric: took 6.488169ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534476   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538660   58430 pod_ready.go:93] pod "kube-proxy-btq6r" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.538688   58430 pod_ready.go:82] duration metric: took 4.202597ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538700   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600350   58430 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.600377   58430 pod_ready.go:82] duration metric: took 61.666987ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600391   58430 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.361968   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:11.859112   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:10.143519   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:45:10.160358   57240 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520 for IP: 192.168.39.125
	I0816 13:45:10.160381   57240 certs.go:194] generating shared ca certs ...
	I0816 13:45:10.160400   57240 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:45:10.160591   57240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:45:10.160646   57240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:45:10.160656   57240 certs.go:256] generating profile certs ...
	I0816 13:45:10.160767   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/client.key
	I0816 13:45:10.160845   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key.f0c5f9ff
	I0816 13:45:10.160893   57240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key
	I0816 13:45:10.161075   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:45:10.161133   57240 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:45:10.161148   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:45:10.161182   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:45:10.161213   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:45:10.161243   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:45:10.161298   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:10.161944   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:45:10.202268   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:45:10.242684   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:45:10.287223   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:45:10.316762   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 13:45:10.343352   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:45:10.371042   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:45:10.394922   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:45:10.419358   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:45:10.442301   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:45:10.465266   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:45:10.487647   57240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:45:10.504713   57240 ssh_runner.go:195] Run: openssl version
	I0816 13:45:10.510493   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:45:10.521818   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526637   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526681   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.532660   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:45:10.543403   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:45:10.554344   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559089   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559149   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.564982   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:45:10.576074   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:45:10.586596   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591586   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591637   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.597624   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
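Each CA copied under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0 and 51391683.0 names above), which is how OpenSSL-based clients locate trusted roots. A sketch that recreates such a link, shelling out to openssl the same way the commands in the log do:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash recreates the "<hash>.0 -> cert" symlink that the
// provisioning commands above set up with openssl x509 -hash.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}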
	I0816 13:45:10.608838   57240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:45:10.613785   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:45:10.619902   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:45:10.625554   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:45:10.631526   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:45:10.637251   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:45:10.643210   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
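openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours; that is how the restart path above decides whether the existing control-plane certs are still usable. The same check in plain Go (stdlib only; the cert path is just one of those probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}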
	I0816 13:45:10.649187   57240 kubeadm.go:392] StartCluster: {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:45:10.649298   57240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:45:10.649349   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.686074   57240 cri.go:89] found id: ""
	I0816 13:45:10.686153   57240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:45:10.696504   57240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:45:10.696527   57240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:45:10.696581   57240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:45:10.706447   57240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:45:10.707413   57240 kubeconfig.go:125] found "embed-certs-302520" server: "https://192.168.39.125:8443"
	I0816 13:45:10.710045   57240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:45:10.719563   57240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0816 13:45:10.719599   57240 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:45:10.719613   57240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:45:10.719665   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.759584   57240 cri.go:89] found id: ""
	I0816 13:45:10.759661   57240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:45:10.776355   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:45:10.786187   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:45:10.786205   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:45:10.786244   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:45:10.795644   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:45:10.795723   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:45:10.807988   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:45:10.817234   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:45:10.817299   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:45:10.826601   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.835702   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:45:10.835763   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.845160   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:45:10.855522   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:45:10.855578   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
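Note: the grep/rm sequence above is the stale-kubeconfig cleanup. For each component kubeconfig under /etc/kubernetes, the runner greps for the expected control-plane endpoint and removes the file when the endpoint is absent; in this run the files do not exist at all, so every grep exits with status 2 and every rm is a no-op. Below is a local sketch of the same decision, assuming direct file access instead of grep over the SSH runner.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            // Missing file or a file without the expected endpoint: remove it
            // so kubeadm regenerates a fresh kubeconfig for that component.
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("removing stale %s\n", conf)
                _ = os.Remove(conf)
                continue
            }
            fmt.Printf("keeping %s\n", conf)
        }
    }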
	I0816 13:45:10.865445   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:45:10.875429   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:10.988958   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.195215   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206217359s)
	I0816 13:45:12.195241   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.432322   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.514631   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
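Note: instead of a full "kubeadm init", the restart path re-runs individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against /var/tmp/minikube/kubeadm.yaml. Below is a sketch of driving that sequence with os/exec, assuming kubeadm is on PATH and the commands run locally rather than through the SSH runner.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phases in the order the log shows them, with the config path from the log.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            fmt.Printf("kubeadm %v:\n%s\n", p, out)
            if err != nil {
                panic(err)
            }
        }
    }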
	I0816 13:45:12.606133   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:45:12.606238   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.106823   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.606856   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.624866   57240 api_server.go:72] duration metric: took 1.018743147s to wait for apiserver process to appear ...
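Note: before probing /healthz, api_server.go waits for a kube-apiserver process to exist by retrying "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly twice per second. A local sketch of that process wait follows; pgrep exits non-zero when nothing matches, which drives the retry.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            // -x: pattern must match the whole command line, -n: newest match,
            // -f: match against the full argument list.
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the kube-apiserver process")
    }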
	I0816 13:45:13.624897   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:45:13.624930   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:13.625953   57240 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I0816 13:45:14.124979   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.607443   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.107647   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.357916   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.358986   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.404020   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.404049   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.404062   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.462649   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.462685   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.625998   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.632560   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:16.632586   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.124984   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.133533   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:17.133563   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.624993   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.629720   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:45:17.635848   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:45:17.635874   57240 api_server.go:131] duration metric: took 4.010970063s to wait for apiserver health ...
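Note: the healthz progression above (connection refused, then 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200) is the normal startup sequence for a restarted apiserver. Below is a sketch of such a polling loop against the endpoint from the log, assuming an unauthenticated probe and skipping TLS verification; it is not minikube's actual client.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves healthz under its own CA, so this anonymous
        // probe skips verification (matching the anonymous 403s above).
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.125:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                status := resp.StatusCode
                resp.Body.Close()
                if status == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", status) // 403/500 while post-start hooks finish
            } else {
                fmt.Println("apiserver not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }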
	I0816 13:45:17.635885   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:17.635892   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:17.637609   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:45:17.638828   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:45:17.650034   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
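Note: with the kvm2 driver and the crio runtime, a bridge CNI config is written to /etc/cni/net.d/1-k8s.conflist (496 bytes in this run). The exact contents are not in the log; the sketch below writes a typical minimal bridge-plus-portmap conflist, and every field value in it is an assumption rather than the file minikube actually generated.

    package main

    import "os"

    func main() {
        // Illustrative bridge CNI conflist; values are assumptions, not the
        // exact 496-byte file from this run. Writing here requires root.
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }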
	I0816 13:45:17.681352   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:45:17.691752   57240 system_pods.go:59] 8 kube-system pods found
	I0816 13:45:17.691784   57240 system_pods.go:61] "coredns-6f6b679f8f-phxht" [df7bd896-d1c6-4a0e-aead-e3db36e915da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:45:17.691792   57240 system_pods.go:61] "etcd-embed-certs-302520" [ef7bae1c-7cd3-4d8e-b2fc-e5837f4c5a1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:45:17.691801   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [957ba8ec-91ae-4cea-902f-81a286e35659] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:45:17.691806   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [afbfc2da-5435-4ebb-ada0-e0edc9d09a7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:45:17.691817   57240 system_pods.go:61] "kube-proxy-nnc6b" [ec8b820d-6f1d-4777-9f76-7efffb4e6e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:45:17.691824   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [077024c8-3dfd-4e8c-850a-333b63d3f23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:45:17.691832   57240 system_pods.go:61] "metrics-server-6867b74b74-9277d" [5d7ee9e5-b40c-4840-9fb4-0b7b8be9faca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:45:17.691837   57240 system_pods.go:61] "storage-provisioner" [6f3dc7f6-a3e0-4bc3-b362-e1d97633d0eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:45:17.691854   57240 system_pods.go:74] duration metric: took 10.481601ms to wait for pod list to return data ...
	I0816 13:45:17.691861   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:45:17.695253   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:45:17.695278   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:45:17.695292   57240 node_conditions.go:105] duration metric: took 3.4236ms to run NodePressure ...
	I0816 13:45:17.695311   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:17.996024   57240 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999887   57240 kubeadm.go:739] kubelet initialised
	I0816 13:45:17.999906   57240 kubeadm.go:740] duration metric: took 3.859222ms waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999913   57240 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:18.004476   57240 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.009142   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009162   57240 pod_ready.go:82] duration metric: took 4.665087ms for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.009170   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009175   57240 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.014083   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014102   57240 pod_ready.go:82] duration metric: took 4.91913ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.014118   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014124   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.018257   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018276   57240 pod_ready.go:82] duration metric: took 4.14471ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.018283   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018288   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.085229   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085257   57240 pod_ready.go:82] duration metric: took 66.95357ms for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.085269   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085276   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485094   57240 pod_ready.go:93] pod "kube-proxy-nnc6b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:18.485124   57240 pod_ready.go:82] duration metric: took 399.831747ms for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485135   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
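Note: each pod_ready wait above polls the pod's Ready condition for up to 4m0s and is skipped (pod_ready.go:98) while the hosting node itself is not Ready. Below is a client-go sketch of the basic check for the scheduler pod named in the log, assuming a reachable kubeconfig; it is illustrative, not minikube's actual helper.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-302520", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }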
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.606838   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.857109   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.858242   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.491635   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:22.492484   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:24.992054   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.107371   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.607589   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.357770   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.358102   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:26.992544   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.491552   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.106454   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:28.107564   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.115954   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:27.358671   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.857631   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:31.862487   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.491291   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:30.491320   57240 pod_ready.go:82] duration metric: took 12.006175772s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:30.491333   57240 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:32.497481   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.500397   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
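Note: because no kube-apiserver process ever appears for this v1.20.0 cluster, the log collector enumerates containers per component with "crictl ps -a --quiet --name=<component>" (every listing above returns an empty ID list, logged as found id: "") alongside the kubelet, dmesg, describe-nodes, CRI-O and container-status dumps. A local sketch of that by-name listing:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs; an empty result is what the
            // log reports as "No container was found matching" the name.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d container(s)\n", name, len(ids))
        }
    }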
	I0816 13:45:32.606912   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.607600   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.358649   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:36.856727   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.003939   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.498178   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:37.106303   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.107343   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:38.857564   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.858908   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:41.998945   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:41.609813   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:44.116781   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.358670   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:45.857710   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.497146   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:48.498302   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.606815   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.107387   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:47.858274   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.861382   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.999007   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:53.497248   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
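	(Each pass of this loop reports the same state: every crictl query for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy and kube-controller-manager returns an empty list, and `kubectl describe nodes` is refused on localhost:8443, i.e. the control plane never came up on this node. A sketch of how the same state could be inspected by hand over SSH, using only commands already shown in the log; the profile name is a placeholder, not taken from the log:)

	    # Hypothetical profile name; substitute the profile used by this test run.
	    PROFILE=<profile>
	    # Expect an empty container list, matching the "found id: \"\"" lines above.
	    minikube ssh -p "$PROFILE" "sudo crictl ps -a"
	    # Kubelet journal explains why the static control-plane pods never started.
	    minikube ssh -p "$PROFILE" "sudo journalctl -u kubelet -n 400"
	    # Reproduces the localhost:8443 connection refusal seen in "describe nodes".
	    minikube ssh -p "$PROFILE" "sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig"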
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:51.607047   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.106991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:52.363317   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.857924   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.497520   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.498597   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.997729   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.107187   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:58.606550   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.358875   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.857940   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.859677   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.998260   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.498473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:01.106533   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:03.107325   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.107629   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.359077   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.857557   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.997606   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:08.998915   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:07.606112   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.608137   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.357201   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.359055   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.497851   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.998849   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
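	(Interleaved with the apiserver wait loop, three other test processes (PIDs 58430, 57440, 57240) keep polling metrics-server pods whose Ready condition stays "False". The pod names are taken from the log; the kubeconfig context is a placeholder. A hedged sketch of checking the same readiness condition directly:)

	    # Hypothetical context name; use the kubeconfig context of the affected profile.
	    CTX=<context>
	    # Print the Ready condition the test is polling; "False" matches the log above.
	    kubectl --context "$CTX" -n kube-system get pod metrics-server-6867b74b74-9277d \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Events and container state give the reason the pod is not becoming Ready.
	    kubectl --context "$CTX" -n kube-system describe pod metrics-server-6867b74b74-9277d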
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:12.106330   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:14.106732   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.857302   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:15.858233   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:16.497347   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.497948   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:16.107456   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.107650   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.608155   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.357472   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.857964   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.498563   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:22.998319   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
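	The interleaved pod_ready lines come from the three other clusters in this run (pids 58430, 57440 and 57240), each polling a metrics-server pod that never turns Ready. A rough equivalent of that readiness check, using a pod name taken from the log (the jsonpath filter is standard kubectl syntax, not minikube's own code):

	    kubectl -n kube-system get pod metrics-server-6867b74b74-j9tqh \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" for as long as the pod is running but not Ready, matching the log above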
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:23.106190   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.106765   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:23.356443   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.356705   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.496930   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.497933   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.500639   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:46:27.606739   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.606870   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.357970   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.861012   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:31.998584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.497511   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.106718   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.606961   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:32.357555   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.857062   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.857679   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.997085   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.999741   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:37.113026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:39.606526   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.857809   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.358047   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.497724   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.498815   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.108651   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:44.606597   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.856419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.858360   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.998195   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.497584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
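Each "connection to the server localhost:8443 was refused" block above simply means nothing is listening on the apiserver port yet, so kubectl cannot describe the nodes. A quick way to confirm that from inside the node (not something the test runs, just a sketch assuming shell access, e.g. via minikube ssh):

	# check whether anything is bound to the apiserver port 8443
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
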
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
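The block above is one full pass of minikube's log-gathering loop for this cluster: every crictl query for a control-plane component returns zero containers, so the tool falls back to collecting the kubelet, dmesg and CRI-O journals and retries a few seconds later. The same checks can be run by hand over SSH (a sketch using only the commands that already appear in the log):

	# is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	# does CRI-O know about an apiserver container, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# the kubelet and CRI-O journals usually explain why the static pods never started
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager
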
	I0816 13:46:47.106288   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:49.108117   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.357517   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.857069   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.501014   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:52.998618   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.607335   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:54.106631   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:53.357847   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.857456   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.497897   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.999173   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
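The pod_ready lines interleaved here come from the other clusters under test, each polling its metrics-server pod until the Ready condition turns True. An equivalent one-off check (a sketch; the jsonpath expression and the <context> placeholder are mine, only the pod name and namespace are taken from the log):

	kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-9277d \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
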
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:56.107168   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:58.606805   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.607098   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.857681   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.357433   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.497738   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.998036   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.998316   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:02.607546   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:05.106955   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.358368   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.359618   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.858437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.999109   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.498087   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:07.107809   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.606867   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.358384   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.359451   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.997273   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.998709   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.107421   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:14.606538   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.857389   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.857870   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:16.498127   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.498877   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:16.607841   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:19.107952   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.358184   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.857525   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.499116   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.996642   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.998888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.607247   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:23.608388   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.857995   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.858506   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.497595   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.997611   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:26.107830   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:28.607294   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:30.613619   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.356723   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.358026   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.857750   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:32.497685   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.500214   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:33.106665   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:35.107026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.358059   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.858347   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.998289   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.499047   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:37.107091   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.108555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.358316   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.858946   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.998041   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.498467   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:41.606734   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:43.608097   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.357080   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.357589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.503576   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.998084   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.105523   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.107122   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.606969   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.357769   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.859617   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.998256   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.498046   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:52.607529   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.107256   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.357315   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.357380   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.498193   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.498238   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.997582   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
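	[editor's sketch] The interleaved pod_ready.go lines are three other clusters polling their metrics-server pods. One way to reproduce that check by hand, assuming the kubeconfig/context points at the affected cluster (pod name taken from the log; substitute the other pod names for the other clusters):

	    # Show the pod's current status, as pod_ready.go does via the API.
	    kubectl -n kube-system get pod metrics-server-6867b74b74-9277d -o wide
	    # Block until the pod reports Ready, or time out (mirrors the test's wait loop).
	    kubectl -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-6867b74b74-9277d --timeout=120s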
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
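	[editor's sketch] Every "describe nodes" attempt above fails with a refused connection to localhost:8443, i.e. no apiserver is listening. A few hedged sanity checks for that state, run on the node; ss and curl are assumed to be available in the node image (they do not appear in the log itself):

	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"   # apiserver port
	    curl -sk https://localhost:8443/healthz ; echo                  # expect connection refused, matching the log
	    sudo crictl ps -a --name=kube-apiserver                         # expect no rows, matching the empty 'found id' lines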
	I0816 13:47:57.606146   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.606603   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.358416   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.861332   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.998633   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.498646   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:01.607315   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.107759   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:02.358142   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.857766   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:06.998437   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.496888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:06.110473   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:08.606467   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:07.356589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.357419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.357839   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.497754   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.997641   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:11.108299   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.612816   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.856979   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.857269   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.999100   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.497473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:16.105992   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.606940   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.607532   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.357812   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.358351   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.498644   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.998103   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:24.999122   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:23.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.607110   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.858674   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.358411   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.497538   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.498345   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:27.607276   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:30.106660   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.856585   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.857836   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:31.498615   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:33.999039   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.107363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.607045   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.357801   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.856980   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.857321   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.497064   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.498666   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:48:36.608364   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:39.106585   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.857386   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.358217   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:40.997776   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.998819   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:44.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.106806   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:43.107007   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:45.606716   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.357715   57440 pod_ready.go:82] duration metric: took 4m0.006671881s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:48:42.357741   57440 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:48:42.357749   57440 pod_ready.go:39] duration metric: took 4m4.542302811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:48:42.357762   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:48:42.357787   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:42.357834   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:42.415231   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:42.415255   57440 cri.go:89] found id: ""
	I0816 13:48:42.415265   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:42.415324   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.421713   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:42.421779   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:42.462840   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:42.462867   57440 cri.go:89] found id: ""
	I0816 13:48:42.462878   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:42.462940   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.467260   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:42.467321   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:42.505423   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:42.505449   57440 cri.go:89] found id: ""
	I0816 13:48:42.505458   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:42.505517   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.510072   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:42.510124   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:42.551873   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:42.551894   57440 cri.go:89] found id: ""
	I0816 13:48:42.551902   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:42.551949   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.556735   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:42.556783   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:42.595853   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:42.595884   57440 cri.go:89] found id: ""
	I0816 13:48:42.595895   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:42.595948   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.600951   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:42.601003   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:42.639288   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.639311   57440 cri.go:89] found id: ""
	I0816 13:48:42.639320   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:42.639367   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.644502   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:42.644554   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:42.685041   57440 cri.go:89] found id: ""
	I0816 13:48:42.685065   57440 logs.go:276] 0 containers: []
	W0816 13:48:42.685074   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:42.685079   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:42.685137   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:42.722485   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.722506   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:42.722510   57440 cri.go:89] found id: ""
	I0816 13:48:42.722519   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:42.722590   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.727136   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.731169   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:42.731189   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.794303   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:42.794334   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.833686   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:42.833715   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:42.874606   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:42.874632   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:42.948074   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:42.948111   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:42.963546   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:42.963571   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:43.027410   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:43.027446   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:43.067643   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:43.067670   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:43.115156   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:43.115183   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:43.246588   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:43.246618   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:43.291042   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:43.291069   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:43.330741   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:43.330771   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:43.371970   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:43.371999   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:46.357313   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:46.373368   57440 api_server.go:72] duration metric: took 4m16.32601859s to wait for apiserver process to appear ...
	I0816 13:48:46.373396   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:48:46.373426   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:46.373473   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:46.411034   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:46.411059   57440 cri.go:89] found id: ""
	I0816 13:48:46.411067   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:46.411121   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.415948   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:46.416009   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:46.458648   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:46.458673   57440 cri.go:89] found id: ""
	I0816 13:48:46.458684   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:46.458735   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.463268   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:46.463332   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:46.502120   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:46.502139   57440 cri.go:89] found id: ""
	I0816 13:48:46.502149   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:46.502319   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.508632   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:46.508692   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:46.552732   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.552757   57440 cri.go:89] found id: ""
	I0816 13:48:46.552765   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:46.552812   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.557459   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:46.557524   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:46.598286   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:46.598308   57440 cri.go:89] found id: ""
	I0816 13:48:46.598330   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:46.598403   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.603050   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:46.603110   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:46.641616   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:46.641638   57440 cri.go:89] found id: ""
	I0816 13:48:46.641648   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:46.641712   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.646008   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:46.646076   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:46.682259   57440 cri.go:89] found id: ""
	I0816 13:48:46.682290   57440 logs.go:276] 0 containers: []
	W0816 13:48:46.682302   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:46.682310   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:46.682366   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:46.718955   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.718979   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:46.718985   57440 cri.go:89] found id: ""
	I0816 13:48:46.718993   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:46.719049   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.723519   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.727942   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:46.727968   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.771942   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:46.771971   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.818294   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:46.818319   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:46.887977   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:46.888021   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:46.903567   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:46.903599   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:47.010715   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:47.010747   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:47.056317   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:47.056346   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:47.114669   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:47.114696   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:47.498472   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.998541   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.606991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.607458   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.157046   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:47.157073   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:47.199364   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:47.199393   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:47.640964   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:47.641003   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:47.683503   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:47.683541   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:47.746748   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:47.746798   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.296176   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:48:50.300482   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:48:50.301550   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:48:50.301570   57440 api_server.go:131] duration metric: took 3.928168044s to wait for apiserver health ...
	I0816 13:48:50.301578   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:48:50.301599   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:50.301653   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:50.343199   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.343223   57440 cri.go:89] found id: ""
	I0816 13:48:50.343231   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:50.343276   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.347576   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:50.347651   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:50.387912   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.387937   57440 cri.go:89] found id: ""
	I0816 13:48:50.387947   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:50.388004   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.392120   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:50.392188   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:50.428655   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:50.428680   57440 cri.go:89] found id: ""
	I0816 13:48:50.428688   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:50.428734   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.432863   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:50.432941   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:50.472269   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:50.472295   57440 cri.go:89] found id: ""
	I0816 13:48:50.472304   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:50.472351   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.476961   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:50.477006   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:50.514772   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:50.514793   57440 cri.go:89] found id: ""
	I0816 13:48:50.514801   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:50.514857   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.520430   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:50.520492   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:50.564708   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:50.564733   57440 cri.go:89] found id: ""
	I0816 13:48:50.564741   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:50.564788   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.569255   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:50.569306   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:50.607803   57440 cri.go:89] found id: ""
	I0816 13:48:50.607823   57440 logs.go:276] 0 containers: []
	W0816 13:48:50.607829   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:50.607835   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:50.607888   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:50.643909   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.643934   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:50.643940   57440 cri.go:89] found id: ""
	I0816 13:48:50.643949   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:50.643994   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.648575   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.653322   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:50.653354   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:50.667847   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:50.667878   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:50.774932   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:50.774969   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.823473   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:50.823503   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.884009   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:50.884044   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.925187   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:50.925219   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.965019   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:50.965046   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:51.033614   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:51.033651   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:51.068360   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:51.068387   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:51.107768   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:51.107792   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:51.163637   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:51.163673   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:51.227436   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:51.227462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:51.265505   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:51.265531   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
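
The log-gathering cycle above repeatedly resolves container IDs with `crictl ps -a --quiet --name=...` and then tails each container's logs with `crictl logs --tail 400 <id>`. Purely as an illustration of that pattern (run locally with os/exec rather than over SSH as the test harness does; function names here are hypothetical, the crictl flags are the ones shown in the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a name filter,
// mirroring the "crictl ps -a --quiet --name=..." calls in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last n lines of one container's logs, as in
// "crictl logs --tail 400 <id>" above.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		logs, _ := tailLogs(id, 400)
		fmt.Println(logs)
	}
}
```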
	I0816 13:48:54.130801   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:48:54.130828   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.130833   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.130837   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.130840   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.130843   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.130846   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.130852   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.130855   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.130862   57440 system_pods.go:74] duration metric: took 3.829279192s to wait for pod list to return data ...
	I0816 13:48:54.130868   57440 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:48:54.133253   57440 default_sa.go:45] found service account: "default"
	I0816 13:48:54.133282   57440 default_sa.go:55] duration metric: took 2.407297ms for default service account to be created ...
	I0816 13:48:54.133292   57440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:48:54.138812   57440 system_pods.go:86] 8 kube-system pods found
	I0816 13:48:54.138835   57440 system_pods.go:89] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.138841   57440 system_pods.go:89] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.138845   57440 system_pods.go:89] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.138849   57440 system_pods.go:89] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.138853   57440 system_pods.go:89] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.138856   57440 system_pods.go:89] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.138863   57440 system_pods.go:89] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.138868   57440 system_pods.go:89] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.138874   57440 system_pods.go:126] duration metric: took 5.576801ms to wait for k8s-apps to be running ...
	I0816 13:48:54.138879   57440 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:48:54.138922   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:54.154406   57440 system_svc.go:56] duration metric: took 15.507123ms WaitForService to wait for kubelet
	I0816 13:48:54.154438   57440 kubeadm.go:582] duration metric: took 4m24.107091364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:48:54.154463   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:48:54.156991   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:48:54.157012   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:48:54.157027   57440 node_conditions.go:105] duration metric: took 2.558338ms to run NodePressure ...
	I0816 13:48:54.157041   57440 start.go:241] waiting for startup goroutines ...
	I0816 13:48:54.157052   57440 start.go:246] waiting for cluster config update ...
	I0816 13:48:54.157070   57440 start.go:255] writing updated cluster config ...
	I0816 13:48:54.157381   57440 ssh_runner.go:195] Run: rm -f paused
	I0816 13:48:54.205583   57440 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:48:54.207845   57440 out.go:177] * Done! kubectl is now configured to use "no-preload-311070" cluster and "default" namespace by default
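
The run that just completed waited for the control plane by polling the apiserver's /healthz endpoint until it returned 200 ("Checking apiserver healthz ... returned 200: ok" above). A minimal sketch of such a poll, assuming the URL, interval, and timeout shown here rather than minikube's actual values:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate; a real client
			// would pin the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is reachable
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.116:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```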
	I0816 13:48:51.999301   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.498057   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:52.107465   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.606735   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.498967   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.997311   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.606925   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.606970   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.607943   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.997760   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:02.998653   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:03.107555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.606363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.497723   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.498572   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.997905   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.607916   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.606579   58430 pod_ready.go:82] duration metric: took 4m0.00617652s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:09.606602   58430 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:49:09.606612   58430 pod_ready.go:39] duration metric: took 4m3.606005486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:09.606627   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:49:09.606652   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:09.606698   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:09.660442   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:09.660461   58430 cri.go:89] found id: ""
	I0816 13:49:09.660469   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:09.660519   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.664752   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:09.664813   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:09.701589   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:09.701615   58430 cri.go:89] found id: ""
	I0816 13:49:09.701625   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:09.701681   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.706048   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:09.706114   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:09.743810   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:09.743832   58430 cri.go:89] found id: ""
	I0816 13:49:09.743841   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:09.743898   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.748197   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:09.748271   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:09.783730   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:09.783752   58430 cri.go:89] found id: ""
	I0816 13:49:09.783765   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:09.783828   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.787845   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:09.787909   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:09.828449   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:09.828472   58430 cri.go:89] found id: ""
	I0816 13:49:09.828481   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:09.828546   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.832890   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:09.832963   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:09.880136   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:09.880164   58430 cri.go:89] found id: ""
	I0816 13:49:09.880175   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:09.880232   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.884533   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:09.884599   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:09.924776   58430 cri.go:89] found id: ""
	I0816 13:49:09.924805   58430 logs.go:276] 0 containers: []
	W0816 13:49:09.924816   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:09.924828   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:09.924889   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:09.971663   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:09.971689   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:09.971695   58430 cri.go:89] found id: ""
	I0816 13:49:09.971705   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:09.971770   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.976297   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.980815   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:09.980844   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:10.020287   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:10.020317   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:10.060266   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:10.060291   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:10.113574   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:10.113608   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:10.153457   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:10.153482   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:10.191530   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:10.191559   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:10.206267   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:10.206296   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:10.326723   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:10.326753   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:10.377541   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:10.377574   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:10.895387   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:10.895445   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:10.947447   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:10.947475   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:11.997943   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:13.998932   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:11.020745   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:11.020786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:11.081224   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:11.081257   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.632726   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:49:13.651185   58430 api_server.go:72] duration metric: took 4m14.880109274s to wait for apiserver process to appear ...
	I0816 13:49:13.651214   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:49:13.651254   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:13.651308   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:13.691473   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:13.691495   58430 cri.go:89] found id: ""
	I0816 13:49:13.691503   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:13.691582   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.695945   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:13.695998   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:13.730798   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:13.730830   58430 cri.go:89] found id: ""
	I0816 13:49:13.730840   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:13.730913   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.735156   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:13.735222   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:13.769612   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:13.769639   58430 cri.go:89] found id: ""
	I0816 13:49:13.769650   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:13.769710   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.773690   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:13.773745   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:13.815417   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:13.815444   58430 cri.go:89] found id: ""
	I0816 13:49:13.815454   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:13.815515   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.819596   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:13.819666   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:13.852562   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.852587   58430 cri.go:89] found id: ""
	I0816 13:49:13.852597   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:13.852657   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.856697   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:13.856757   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:13.902327   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:13.902346   58430 cri.go:89] found id: ""
	I0816 13:49:13.902353   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:13.902416   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.906789   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:13.906840   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:13.943401   58430 cri.go:89] found id: ""
	I0816 13:49:13.943430   58430 logs.go:276] 0 containers: []
	W0816 13:49:13.943438   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:13.943443   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:13.943490   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:13.979154   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:13.979178   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:13.979182   58430 cri.go:89] found id: ""
	I0816 13:49:13.979189   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:13.979235   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.983301   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.988522   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:13.988545   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:14.005891   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:14.005916   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:14.055686   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:14.055713   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:14.104975   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:14.105010   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:14.145761   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:14.145786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:14.198935   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:14.198966   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:14.662287   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:14.662323   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:14.717227   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:14.717256   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:14.789824   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:14.789868   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:14.902892   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:14.902922   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:14.946711   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:14.946736   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:14.986143   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:14.986175   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:15.022107   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:15.022138   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:16.497493   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:18.497979   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:17.556820   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:49:17.562249   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:49:17.563264   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:49:17.563280   58430 api_server.go:131] duration metric: took 3.912060569s to wait for apiserver health ...
	I0816 13:49:17.563288   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:49:17.563312   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:17.563377   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:17.604072   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:17.604099   58430 cri.go:89] found id: ""
	I0816 13:49:17.604109   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:17.604163   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.608623   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:17.608678   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:17.650241   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.650267   58430 cri.go:89] found id: ""
	I0816 13:49:17.650275   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:17.650328   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.654928   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:17.655000   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:17.690057   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:17.690085   58430 cri.go:89] found id: ""
	I0816 13:49:17.690095   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:17.690164   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.694636   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:17.694692   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:17.730134   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:17.730167   58430 cri.go:89] found id: ""
	I0816 13:49:17.730177   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:17.730238   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.734364   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:17.734420   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:17.769579   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:17.769595   58430 cri.go:89] found id: ""
	I0816 13:49:17.769603   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:17.769643   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.773543   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:17.773601   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:17.814287   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:17.814310   58430 cri.go:89] found id: ""
	I0816 13:49:17.814319   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:17.814393   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.818904   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:17.818977   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:17.858587   58430 cri.go:89] found id: ""
	I0816 13:49:17.858614   58430 logs.go:276] 0 containers: []
	W0816 13:49:17.858622   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:17.858627   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:17.858674   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:17.901759   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:17.901784   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:17.901788   58430 cri.go:89] found id: ""
	I0816 13:49:17.901796   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:17.901853   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.906139   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.910273   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:17.910293   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:17.924565   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:17.924590   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.971895   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:17.971927   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:18.011332   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:18.011364   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:18.049264   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:18.049292   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:18.084004   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:18.084030   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:18.136961   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:18.137000   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:18.210452   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:18.210483   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:18.327398   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:18.327429   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:18.378777   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:18.378809   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:18.430052   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:18.430088   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:18.496775   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:18.496806   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:18.540493   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:18.540523   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:21.451644   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:49:21.451673   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.451679   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.451682   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.451687   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.451691   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.451694   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.451701   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.451705   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.451713   58430 system_pods.go:74] duration metric: took 3.888418707s to wait for pod list to return data ...
	I0816 13:49:21.451719   58430 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:49:21.454558   58430 default_sa.go:45] found service account: "default"
	I0816 13:49:21.454578   58430 default_sa.go:55] duration metric: took 2.853068ms for default service account to be created ...
	I0816 13:49:21.454585   58430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:49:21.458906   58430 system_pods.go:86] 8 kube-system pods found
	I0816 13:49:21.458930   58430 system_pods.go:89] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.458935   58430 system_pods.go:89] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.458941   58430 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.458944   58430 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.458948   58430 system_pods.go:89] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.458951   58430 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.458958   58430 system_pods.go:89] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.458961   58430 system_pods.go:89] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.458968   58430 system_pods.go:126] duration metric: took 4.378971ms to wait for k8s-apps to be running ...
	I0816 13:49:21.458975   58430 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:49:21.459016   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:21.476060   58430 system_svc.go:56] duration metric: took 17.075817ms WaitForService to wait for kubelet
	I0816 13:49:21.476086   58430 kubeadm.go:582] duration metric: took 4m22.705015833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:49:21.476109   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:49:21.479557   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:49:21.479585   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:49:21.479600   58430 node_conditions.go:105] duration metric: took 3.483638ms to run NodePressure ...
	I0816 13:49:21.479613   58430 start.go:241] waiting for startup goroutines ...
	I0816 13:49:21.479622   58430 start.go:246] waiting for cluster config update ...
	I0816 13:49:21.479637   58430 start.go:255] writing updated cluster config ...
	I0816 13:49:21.479949   58430 ssh_runner.go:195] Run: rm -f paused
	I0816 13:49:21.530237   58430 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:49:21.532328   58430 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893736" cluster and "default" namespace by default
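
The recurring pod_ready.go lines (pod "metrics-server-..." has status "Ready":"False") come from checking the pod's PodReady condition. A minimal client-go sketch of that check, not minikube's actual implementation; the kubeconfig path and pod name are taken from the log purely as example inputs:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		// "Ready":"False" in the log corresponds to this condition not being True.
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(cs, "kube-system", "metrics-server-6867b74b74-j9tqh")
	fmt.Println(ready, err)
}
```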
	I0816 13:49:20.998486   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:23.498358   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:25.498502   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:27.998622   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:30.491886   57240 pod_ready.go:82] duration metric: took 4m0.000539211s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:30.491929   57240 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 13:49:30.491945   57240 pod_ready.go:39] duration metric: took 4m12.492024576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:30.491972   57240 kubeadm.go:597] duration metric: took 4m19.795438093s to restartPrimaryControlPlane
	W0816 13:49:30.492032   57240 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:49:30.492059   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:49:56.783263   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.29118348s)
	I0816 13:49:56.783321   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:56.798550   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:49:56.810542   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:49:56.820837   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:49:56.820873   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:49:56.820947   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:49:56.831998   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:49:56.832057   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:49:56.842351   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:49:56.852062   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:49:56.852119   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:49:56.862337   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.872000   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:49:56.872050   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.881764   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:49:56.891211   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:49:56.891276   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:49:56.900969   57240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:49:56.942823   57240 kubeadm.go:310] W0816 13:49:56.895203    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:56.943751   57240 kubeadm.go:310] W0816 13:49:56.896255    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:57.049491   57240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:05.244505   57240 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 13:50:05.244561   57240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:05.244657   57240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:05.244775   57240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:05.244901   57240 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:05.244989   57240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:05.246568   57240 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:05.246667   57240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:05.246779   57240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:05.246885   57240 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:05.246968   57240 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:05.247065   57240 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:05.247125   57240 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:05.247195   57240 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:05.247260   57240 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:05.247372   57240 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:05.247480   57240 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:05.247521   57240 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:05.247590   57240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:05.247670   57240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:05.247751   57240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 13:50:05.247830   57240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:05.247886   57240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:05.247965   57240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:05.248046   57240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:05.248100   57240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:05.249601   57240 out.go:235]   - Booting up control plane ...
	I0816 13:50:05.249698   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:05.249779   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:05.249835   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:05.249930   57240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:05.250007   57240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:05.250046   57240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:05.250184   57240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 13:50:05.250289   57240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 13:50:05.250343   57240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296228s
	I0816 13:50:05.250403   57240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 13:50:05.250456   57240 kubeadm.go:310] [api-check] The API server is healthy after 5.002119618s
	I0816 13:50:05.250546   57240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 13:50:05.250651   57240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 13:50:05.250700   57240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 13:50:05.250876   57240 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-302520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 13:50:05.250930   57240 kubeadm.go:310] [bootstrap-token] Using token: dta4cr.diyk2wto3tx3ixlb
	I0816 13:50:05.252120   57240 out.go:235]   - Configuring RBAC rules ...
	I0816 13:50:05.252207   57240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 13:50:05.252287   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 13:50:05.252418   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 13:50:05.252542   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 13:50:05.252648   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 13:50:05.252724   57240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 13:50:05.252819   57240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 13:50:05.252856   57240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 13:50:05.252895   57240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 13:50:05.252901   57240 kubeadm.go:310] 
	I0816 13:50:05.253004   57240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 13:50:05.253022   57240 kubeadm.go:310] 
	I0816 13:50:05.253116   57240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 13:50:05.253126   57240 kubeadm.go:310] 
	I0816 13:50:05.253155   57240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 13:50:05.253240   57240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 13:50:05.253283   57240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 13:50:05.253289   57240 kubeadm.go:310] 
	I0816 13:50:05.253340   57240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 13:50:05.253347   57240 kubeadm.go:310] 
	I0816 13:50:05.253405   57240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 13:50:05.253423   57240 kubeadm.go:310] 
	I0816 13:50:05.253484   57240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 13:50:05.253556   57240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 13:50:05.253621   57240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 13:50:05.253629   57240 kubeadm.go:310] 
	I0816 13:50:05.253710   57240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 13:50:05.253840   57240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 13:50:05.253855   57240 kubeadm.go:310] 
	I0816 13:50:05.253962   57240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254087   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 13:50:05.254122   57240 kubeadm.go:310] 	--control-plane 
	I0816 13:50:05.254126   57240 kubeadm.go:310] 
	I0816 13:50:05.254202   57240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 13:50:05.254209   57240 kubeadm.go:310] 
	I0816 13:50:05.254280   57240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254394   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
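Note: the bootstrap token printed in the join commands above expires (24h by default). A fresh worker join command can be regenerated on the control-plane node if needed; a minimal sketch, assuming kubeadm is reachable on the node's PATH:

	# regenerate a bootstrap token and print the matching worker join command (illustrative)
	sudo kubeadm token create --print-join-command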
	I0816 13:50:05.254407   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:50:05.254416   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:50:05.255889   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:50:05.257086   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:50:05.268668   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
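The bridge CNI config copied here can be inspected on the node itself; a minimal sketch, assuming the out/minikube-linux-amd64 binary and the embed-certs-302520 profile from this run, and the conflist path shown in the line above:

	# show the bridge CNI configuration placed at /etc/cni/net.d/1-k8s.conflist (illustrative)
	out/minikube-linux-amd64 -p embed-certs-302520 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"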
	I0816 13:50:05.288676   57240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:50:05.288735   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.288755   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-302520 minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=embed-certs-302520 minikube.k8s.io/primary=true
	I0816 13:50:05.494987   57240 ops.go:34] apiserver oom_adj: -16
	I0816 13:50:05.495066   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.995792   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.495937   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.995513   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.495437   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.995600   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.495194   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.995101   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.495533   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.659383   57240 kubeadm.go:1113] duration metric: took 4.370714211s to wait for elevateKubeSystemPrivileges
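The RBAC binding and node labels applied in the steps above can be double-checked from outside the VM; a minimal sketch, assuming the kubeconfig context carries the profile name embed-certs-302520:

	# confirm the minikube-rbac clusterrolebinding and the labels set on the node (illustrative)
	kubectl --context embed-certs-302520 get clusterrolebinding minikube-rbac
	kubectl --context embed-certs-302520 get node embed-certs-302520 --show-labels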
	I0816 13:50:09.659425   57240 kubeadm.go:394] duration metric: took 4m59.010243945s to StartCluster
	I0816 13:50:09.659448   57240 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.659529   57240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:50:09.661178   57240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.661475   57240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:50:09.661579   57240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:50:09.661662   57240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-302520"
	I0816 13:50:09.661678   57240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-302520"
	I0816 13:50:09.661693   57240 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-302520"
	W0816 13:50:09.661701   57240 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:50:09.661683   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:50:09.661707   57240 addons.go:69] Setting metrics-server=true in profile "embed-certs-302520"
	I0816 13:50:09.661730   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.661732   57240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-302520"
	I0816 13:50:09.661744   57240 addons.go:234] Setting addon metrics-server=true in "embed-certs-302520"
	W0816 13:50:09.661758   57240 addons.go:243] addon metrics-server should already be in state true
	I0816 13:50:09.661789   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.662063   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662070   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662092   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662093   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662125   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662177   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.663568   57240 out.go:177] * Verifying Kubernetes components...
	I0816 13:50:09.665144   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:50:09.679643   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 13:50:09.679976   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0816 13:50:09.680138   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680460   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680652   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.680677   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681040   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.681060   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681084   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681449   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681659   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.681706   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.681737   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.682300   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0816 13:50:09.682644   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.683099   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.683121   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.683464   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.683993   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.684020   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.684695   57240 addons.go:234] Setting addon default-storageclass=true in "embed-certs-302520"
	W0816 13:50:09.684713   57240 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:50:09.684733   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.685016   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.685044   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.699612   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0816 13:50:09.700235   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.700244   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0816 13:50:09.700776   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.700795   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.700827   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.701285   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.701369   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0816 13:50:09.701457   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.701467   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.701939   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.701980   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.702188   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.702209   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.702494   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.702618   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.702635   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.703042   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.703250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.704568   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.705308   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.707074   57240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:50:09.707074   57240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:50:09.708773   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:50:09.708792   57240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:50:09.708813   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.708894   57240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:09.708924   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:50:09.708941   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.714305   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714338   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714812   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714874   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714928   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.715181   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715215   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715363   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715399   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715512   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715556   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715634   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.715876   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.724172   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0816 13:50:09.724636   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.725184   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.725213   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.725596   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.725799   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.727188   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.727410   57240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:09.727426   57240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:50:09.727447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.729840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730228   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.730255   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730534   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.730723   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.730867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.731014   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.899195   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:50:09.939173   57240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958087   57240 node_ready.go:49] node "embed-certs-302520" has status "Ready":"True"
	I0816 13:50:09.958119   57240 node_ready.go:38] duration metric: took 18.911367ms for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958130   57240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:09.963326   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:10.083721   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:10.184794   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:10.203192   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:50:10.203214   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:50:10.285922   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:50:10.285950   57240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:50:10.370797   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:10.370825   57240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:50:10.420892   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.420942   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421261   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421280   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.421282   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421293   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.421303   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421556   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421620   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421625   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.427229   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.427250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.427591   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.427638   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.427655   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.454486   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:11.225905   57240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041077031s)
	I0816 13:50:11.225958   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.225969   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226248   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226268   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.226273   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226295   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.226310   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226561   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226608   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226627   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447454   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447484   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.447823   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.447890   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.447908   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447924   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447936   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.448179   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.448195   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.448241   57240 addons.go:475] Verifying addon metrics-server=true in "embed-certs-302520"
	I0816 13:50:11.450274   57240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 13:50:11.451676   57240 addons.go:510] duration metric: took 1.790101568s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
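To confirm the addons enabled above actually landed, the objects they create can be queried directly; a minimal sketch, assuming the context embed-certs-302520 and the object names visible later in this log:

	# metrics-server is deployed as a Deployment, storage-provisioner as a single pod (illustrative)
	kubectl --context embed-certs-302520 -n kube-system get deployment metrics-server
	kubectl --context embed-certs-302520 -n kube-system get pod storage-provisioner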
	I0816 13:50:11.971087   57240 pod_ready.go:103] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:12.470167   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.470193   57240 pod_ready.go:82] duration metric: took 2.506842546s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.470203   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474959   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.474980   57240 pod_ready.go:82] duration metric: took 4.769458ms for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474988   57240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479388   57240 pod_ready.go:93] pod "etcd-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.479410   57240 pod_ready.go:82] duration metric: took 4.41564ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479421   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483567   57240 pod_ready.go:93] pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.483589   57240 pod_ready.go:82] duration metric: took 4.159906ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483600   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:14.490212   57240 pod_ready.go:103] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:15.990204   57240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.990226   57240 pod_ready.go:82] duration metric: took 3.506618768s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.990235   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994580   57240 pod_ready.go:93] pod "kube-proxy-spgtw" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.994597   57240 pod_ready.go:82] duration metric: took 4.356588ms for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994605   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068472   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:16.068495   57240 pod_ready.go:82] duration metric: took 73.884906ms for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068503   57240 pod_ready.go:39] duration metric: took 6.110362477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:16.068519   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:50:16.068579   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:50:16.086318   57240 api_server.go:72] duration metric: took 6.424804798s to wait for apiserver process to appear ...
	I0816 13:50:16.086345   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:50:16.086361   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:50:16.091170   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:50:16.092122   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:50:16.092138   57240 api_server.go:131] duration metric: took 5.787898ms to wait for apiserver health ...
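The same health probe can be reproduced by hand against the endpoint logged above; a healthy apiserver answers with a plain "ok". A minimal sketch, using -k because the apiserver certificate is not in the local trust store:

	# probe the apiserver health endpoint from the log (illustrative)
	curl -sk https://192.168.39.125:8443/healthz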
	I0816 13:50:16.092146   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:50:16.271303   57240 system_pods.go:59] 9 kube-system pods found
	I0816 13:50:16.271338   57240 system_pods.go:61] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.271344   57240 system_pods.go:61] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.271348   57240 system_pods.go:61] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.271353   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.271359   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.271364   57240 system_pods.go:61] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.271370   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.271379   57240 system_pods.go:61] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.271389   57240 system_pods.go:61] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.271398   57240 system_pods.go:74] duration metric: took 179.244421ms to wait for pod list to return data ...
	I0816 13:50:16.271410   57240 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:50:16.468167   57240 default_sa.go:45] found service account: "default"
	I0816 13:50:16.468196   57240 default_sa.go:55] duration metric: took 196.779435ms for default service account to be created ...
	I0816 13:50:16.468207   57240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:50:16.670917   57240 system_pods.go:86] 9 kube-system pods found
	I0816 13:50:16.670943   57240 system_pods.go:89] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.670949   57240 system_pods.go:89] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.670953   57240 system_pods.go:89] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.670957   57240 system_pods.go:89] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.670960   57240 system_pods.go:89] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.670963   57240 system_pods.go:89] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.670967   57240 system_pods.go:89] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.670972   57240 system_pods.go:89] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.670976   57240 system_pods.go:89] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.670984   57240 system_pods.go:126] duration metric: took 202.771216ms to wait for k8s-apps to be running ...
	I0816 13:50:16.670990   57240 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:50:16.671040   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:16.686873   57240 system_svc.go:56] duration metric: took 15.876641ms WaitForService to wait for kubelet
	I0816 13:50:16.686906   57240 kubeadm.go:582] duration metric: took 7.025397638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:50:16.686925   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:50:16.869367   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:50:16.869393   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:50:16.869405   57240 node_conditions.go:105] duration metric: took 182.475776ms to run NodePressure ...
	I0816 13:50:16.869420   57240 start.go:241] waiting for startup goroutines ...
	I0816 13:50:16.869427   57240 start.go:246] waiting for cluster config update ...
	I0816 13:50:16.869436   57240 start.go:255] writing updated cluster config ...
	I0816 13:50:16.869686   57240 ssh_runner.go:195] Run: rm -f paused
	I0816 13:50:16.919168   57240 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:50:16.921207   57240 out.go:177] * Done! kubectl is now configured to use "embed-certs-302520" cluster and "default" namespace by default
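With the embed-certs-302520 cluster reported ready, a quick sanity check against it looks like this; a minimal sketch, assuming kubectl 1.31.0 as reported in the line above:

	# confirm the node and the kube-system pods that the log reported Ready (illustrative)
	kubectl --context embed-certs-302520 get nodes -o wide
	kubectl --context embed-certs-302520 -n kube-system get pods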
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 
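	The repeated kubelet-check failures above mean kubeadm never saw a healthy kubelet answering on localhost:10248, so no control-plane containers were created (every crictl query in the log above returned an empty list). A minimal triage sketch, assuming shell access to the affected node (for example via 'minikube ssh') and the cri-o socket path shown in the log:
	
	# Is the kubelet service running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	
	# Did cri-o start any control-plane containers at all?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	# Suggestion from the output above: retry the start with an explicit cgroup driver
	minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	These commands only restate the troubleshooting steps the kubeadm and minikube output above already suggest; the extra-config flag is the one referenced alongside minikube issue 4172.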
	
	
	==> CRI-O <==
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.560035727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816703560004478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3872808-d60c-425c-9161-06c74d35ff51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.560858695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53a14aa1-0ab4-4bc3-9135-0294c9ce17b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.560956920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53a14aa1-0ab4-4bc3-9135-0294c9ce17b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.561217353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53a14aa1-0ab4-4bc3-9135-0294c9ce17b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.597625142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff021adb-f691-42e5-9380-1c0fcf33af31 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.597719335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff021adb-f691-42e5-9380-1c0fcf33af31 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.598629228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21683fdd-90f4-420f-a57b-ee0006be60ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.599006504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816703598987336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21683fdd-90f4-420f-a57b-ee0006be60ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.599550436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae11de41-0165-4d43-bf5b-23528979486f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.599596871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae11de41-0165-4d43-bf5b-23528979486f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.599792898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae11de41-0165-4d43-bf5b-23528979486f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.636891953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bff57f5f-6436-49da-8848-e2443000dc4d name=/runtime.v1.RuntimeService/Version
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.636995242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bff57f5f-6436-49da-8848-e2443000dc4d name=/runtime.v1.RuntimeService/Version
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.638023027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=634b3bc0-bab8-497b-8bb2-fda8424016d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.638686013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816703638662114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=634b3bc0-bab8-497b-8bb2-fda8424016d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.639175055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01045df8-a4e0-42d6-b42f-cb8910ed7db1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.639226592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01045df8-a4e0-42d6-b42f-cb8910ed7db1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.639519090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01045df8-a4e0-42d6-b42f-cb8910ed7db1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.673205157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ca17334-0304-4dd9-b0e8-da144e93e01f name=/runtime.v1.RuntimeService/Version
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.673292757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ca17334-0304-4dd9-b0e8-da144e93e01f name=/runtime.v1.RuntimeService/Version
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.674296874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=265dd18b-f7d7-4111-83f7-690dfbd45b65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.674835684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816703674810860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=265dd18b-f7d7-4111-83f7-690dfbd45b65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.675408850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abe0e3b0-718f-4efa-8844-3106d2d9f081 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.675520755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abe0e3b0-718f-4efa-8844-3106d2d9f081 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:58:23 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 13:58:23.675746775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abe0e3b0-718f-4efa-8844-3106d2d9f081 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f296429e678f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   0832c1beaccf1       storage-provisioner
	53a684d7eb166       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f2cca593e3500       busybox
	8922cc9760a0e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   3a0568e7a14e9       coredns-6f6b679f8f-xdwhx
	99545c4e9a57a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   f947c137b097f       kube-proxy-btq6r
	17df9b5cc9f16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   0832c1beaccf1       storage-provisioner
	ec5ec870d772b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   910361984af1f       kube-scheduler-default-k8s-diff-port-893736
	83bd481c9871b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   4aceba4e7ec56       etcd-default-k8s-diff-port-893736
	590cecb818b97       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   3758ce55631b9       kube-controller-manager-default-k8s-diff-port-893736
	4f1bf38f05e69       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   5042006cc8ce0       kube-apiserver-default-k8s-diff-port-893736
	
	
	==> coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50656 - 25965 "HINFO IN 1543422988393869237.765001929891377544. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020181201s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-893736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-893736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=default-k8s-diff-port-893736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_38_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:38:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-893736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:55:38 +0000   Fri, 16 Aug 2024 13:38:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:55:38 +0000   Fri, 16 Aug 2024 13:38:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:55:38 +0000   Fri, 16 Aug 2024 13:38:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:55:38 +0000   Fri, 16 Aug 2024 13:45:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.186
	  Hostname:    default-k8s-diff-port-893736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6f3dd157da547f5bd69db04ff223432
	  System UUID:                d6f3dd15-7da5-47f5-bd69-db04ff223432
	  Boot ID:                    994e1b50-ef04-41ea-aa93-7dd82a2a6026
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-6f6b679f8f-xdwhx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-default-k8s-diff-port-893736                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-893736             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-893736    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-btq6r                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-893736             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-6867b74b74-j9tqh                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-893736 event: Registered Node default-k8s-diff-port-893736 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-893736 event: Registered Node default-k8s-diff-port-893736 in Controller
	
	
	==> dmesg <==
	[Aug16 13:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053339] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.239436] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.611213] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.383665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.815541] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061386] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060954] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.176878] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.136118] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.309851] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.225700] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +2.167717] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.065550] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.582640] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.405193] systemd-fstab-generator[1555]: Ignoring "noauto" option for root device
	[Aug16 13:45] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.046548] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] <==
	{"level":"info","ts":"2024-08-16T13:44:52.171607Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T13:44:52.171878Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e5596a975f8061c0","initial-advertise-peer-urls":["https://192.168.50.186:2380"],"listen-peer-urls":["https://192.168.50.186:2380"],"advertise-client-urls":["https://192.168.50.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T13:44:52.171925Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:44:52.172051Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.186:2380"}
	{"level":"info","ts":"2024-08-16T13:44:52.172075Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.186:2380"}
	{"level":"info","ts":"2024-08-16T13:44:54.033752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5596a975f8061c0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T13:44:54.033864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5596a975f8061c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:44:54.033918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5596a975f8061c0 received MsgPreVoteResp from e5596a975f8061c0 at term 2"}
	{"level":"info","ts":"2024-08-16T13:44:54.033953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5596a975f8061c0 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:44:54.033977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5596a975f8061c0 received MsgVoteResp from e5596a975f8061c0 at term 3"}
	{"level":"info","ts":"2024-08-16T13:44:54.034009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5596a975f8061c0 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T13:44:54.034036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5596a975f8061c0 elected leader e5596a975f8061c0 at term 3"}
	{"level":"info","ts":"2024-08-16T13:44:54.037799Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e5596a975f8061c0","local-member-attributes":"{Name:default-k8s-diff-port-893736 ClientURLs:[https://192.168.50.186:2379]}","request-path":"/0/members/e5596a975f8061c0/attributes","cluster-id":"e001ea9e448e2c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:44:54.037827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:44:54.038151Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:44:54.038192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:44:54.037861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:44:54.039036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:44:54.039263Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:44:54.039846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.186:2379"}
	{"level":"info","ts":"2024-08-16T13:44:54.040503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-16T13:45:12.297276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.598558ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7043789639099131572 > lease_revoke:<id:61c0915b6f14a524>","response":"size:27"}
	{"level":"info","ts":"2024-08-16T13:54:54.073528Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":863}
	{"level":"info","ts":"2024-08-16T13:54:54.084682Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":863,"took":"10.702137ms","hash":2701611848,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2666496,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-16T13:54:54.084814Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2701611848,"revision":863,"compact-revision":-1}
	
	
	==> kernel <==
	 13:58:24 up 13 min,  0 users,  load average: 0.01, 0.10, 0.11
	Linux default-k8s-diff-port-893736 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] <==
	W0816 13:54:56.358316       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:54:56.358526       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:54:56.359418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:54:56.360613       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 13:55:56.360238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:55:56.360606       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 13:55:56.360748       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:55:56.360853       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:55:56.361773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:55:56.362958       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 13:57:56.362164       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:57:56.362516       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 13:57:56.363322       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:57:56.363412       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:57:56.364534       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:57:56.364561       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] <==
	E0816 13:52:58.974603       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:52:59.421754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:53:28.981733       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:53:29.430181       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:53:58.988612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:53:59.437351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:54:28.994962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:54:29.446362       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:54:59.001713       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:54:59.455014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:55:29.009584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:55:29.462887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:55:38.791261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-893736"
	E0816 13:55:59.016157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:55:59.470054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:56:05.067681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="317.496µs"
	I0816 13:56:18.061048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.077µs"
	E0816 13:56:29.022223       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:56:29.477194       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:56:59.029688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:56:59.484968       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:57:29.036887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:57:29.493148       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:57:59.043237       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:57:59.501411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:44:56.807267       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:44:56.817235       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.186"]
	E0816 13:44:56.817411       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:44:56.848664       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:44:56.848711       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:44:56.848737       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:44:56.852262       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:44:56.852605       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:44:56.852618       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:56.854157       1 config.go:197] "Starting service config controller"
	I0816 13:44:56.854183       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:44:56.854208       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:44:56.854213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:44:56.857123       1 config.go:326] "Starting node config controller"
	I0816 13:44:56.857175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:44:56.955262       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:44:56.955338       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:44:56.957415       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] <==
	I0816 13:44:52.926610       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:44:55.310586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:44:55.312539       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:44:55.312765       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:44:55.312868       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:44:55.371928       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:44:55.371984       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:55.382179       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:44:55.382299       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:44:55.382334       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:44:55.382366       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:44:55.483608       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 13:57:11 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:11.247989     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816631247251984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:21 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:21.249856     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816641249139724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:21 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:21.250271     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816641249139724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:23 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:23.046663     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 13:57:31 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:31.251905     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816651251558110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:31 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:31.252290     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816651251558110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:35 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:35.045963     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 13:57:41 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:41.254097     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816661253735193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:41 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:41.254378     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816661253735193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:48 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:48.046476     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:51.069093     937 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:51.255774     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816671255342506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:57:51 default-k8s-diff-port-893736 kubelet[937]: E0816 13:57:51.255817     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816671255342506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:01 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:01.046499     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 13:58:01 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:01.258666     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816681258340994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:01 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:01.258706     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816681258340994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:11 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:11.260897     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816691260380143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:11 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:11.261187     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816691260380143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:16 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:16.045658     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 13:58:21 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:21.262614     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816701262086832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:21 default-k8s-diff-port-893736 kubelet[937]: E0816 13:58:21.262985     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816701262086832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] <==
	I0816 13:44:56.680103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 13:45:26.684749       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] <==
	I0816 13:45:27.384272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:45:27.396206       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:45:27.396288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:45:44.795204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:45:44.795771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b686b8e6-c7e8-4382-830a-268f7125cb2c", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-893736_9dc92472-8aec-48b8-972b-56b4cd9bdaff became leader
	I0816 13:45:44.797879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-893736_9dc92472-8aec-48b8-972b-56b4cd9bdaff!
	I0816 13:45:44.898673       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-893736_9dc92472-8aec-48b8-972b-56b4cd9bdaff!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-j9tqh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 describe pod metrics-server-6867b74b74-j9tqh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-893736 describe pod metrics-server-6867b74b74-j9tqh: exit status 1 (63.031152ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-j9tqh" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-893736 describe pod metrics-server-6867b74b74-j9tqh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.18s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0816 13:50:40.920649   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-302520 -n embed-certs-302520
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 13:59:17.446251197 +0000 UTC m=+5900.354941811
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
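For context, the wait that timed out above can be reproduced by hand with kubectl. This is an illustrative sketch only, not part of the test harness; it assumes the profile's kubeconfig context is named embed-certs-302520 after the profile, as with the other contexts used in this report:

	kubectl --context embed-certs-302520 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

Listing the pods first (kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard) shows whether the dashboard addon ever created a pod with that label at all.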
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-302520 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-302520 logs -n 25: (2.09944437s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-779306 -- sudo                         | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-779306                                 | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759623                           | kubernetes-upgrade-759623    | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:35 UTC |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-302520            | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:42:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:42:15.998819   58430 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:42:15.998960   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.998970   58430 out.go:358] Setting ErrFile to fd 2...
	I0816 13:42:15.998976   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.999197   58430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:42:15.999747   58430 out.go:352] Setting JSON to false
	I0816 13:42:16.000715   58430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5081,"bootTime":1723810655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:42:16.000770   58430 start.go:139] virtualization: kvm guest
	I0816 13:42:16.003216   58430 out.go:177] * [default-k8s-diff-port-893736] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:42:16.004663   58430 notify.go:220] Checking for updates...
	I0816 13:42:16.004698   58430 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:42:16.006298   58430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:42:16.007719   58430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:42:16.009073   58430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:42:16.010602   58430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:42:16.012058   58430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:42:16.013799   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:42:16.014204   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.014278   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.029427   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0816 13:42:16.029977   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.030548   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.030573   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.030903   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.031164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.031412   58430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:42:16.031691   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.031731   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.046245   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0816 13:42:16.046668   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.047205   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.047244   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.047537   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.047730   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.080470   58430 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:42:16.081700   58430 start.go:297] selected driver: kvm2
	I0816 13:42:16.081721   58430 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.081825   58430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:42:16.082512   58430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.082593   58430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:42:16.097784   58430 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:42:16.098155   58430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:42:16.098223   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:42:16.098233   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:42:16.098274   58430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.098365   58430 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.100341   58430 out.go:177] * Starting "default-k8s-diff-port-893736" primary control-plane node in "default-k8s-diff-port-893736" cluster
	I0816 13:42:17.205125   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:16.101925   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:42:16.101966   58430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:42:16.101973   58430 cache.go:56] Caching tarball of preloaded images
	I0816 13:42:16.102052   58430 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:42:16.102063   58430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:42:16.102162   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:42:16.102344   58430 start.go:360] acquireMachinesLock for default-k8s-diff-port-893736: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:42:23.285172   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:26.357214   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:32.437218   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:35.509221   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:41.589174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:44.661162   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:50.741223   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:53.813193   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:59.893180   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:02.965205   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:09.045252   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:12.117232   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:18.197189   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:21.269234   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:27.349182   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:30.421174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:36.501197   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:39.573246   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:42.577406   57440 start.go:364] duration metric: took 4m10.318515071s to acquireMachinesLock for "no-preload-311070"
	I0816 13:43:42.577513   57440 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:43:42.577529   57440 fix.go:54] fixHost starting: 
	I0816 13:43:42.577955   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:43:42.577989   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:43:42.593032   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0816 13:43:42.593416   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:43:42.593860   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:43:42.593882   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:43:42.594256   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:43:42.594434   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:43:42.594586   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:43:42.596234   57440 fix.go:112] recreateIfNeeded on no-preload-311070: state=Stopped err=<nil>
	I0816 13:43:42.596261   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	W0816 13:43:42.596431   57440 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:43:42.598334   57440 out.go:177] * Restarting existing kvm2 VM for "no-preload-311070" ...
	I0816 13:43:42.574954   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:43:42.574990   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575324   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:43:42.575349   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575554   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:43:42.577250   57240 machine.go:96] duration metric: took 4m37.4289608s to provisionDockerMachine
	I0816 13:43:42.577309   57240 fix.go:56] duration metric: took 4m37.450613575s for fixHost
	I0816 13:43:42.577314   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 4m37.450631849s
	W0816 13:43:42.577330   57240 start.go:714] error starting host: provision: host is not running
	W0816 13:43:42.577401   57240 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 13:43:42.577410   57240 start.go:729] Will try again in 5 seconds ...
	I0816 13:43:42.599558   57440 main.go:141] libmachine: (no-preload-311070) Calling .Start
	I0816 13:43:42.599720   57440 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:43:42.600383   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:43:42.600682   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:43:42.601157   57440 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:43:42.601868   57440 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:43:43.816308   57440 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:43:43.817179   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:43.817566   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:43.817586   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:43.817516   58770 retry.go:31] will retry after 295.385031ms: waiting for machine to come up
	I0816 13:43:44.115046   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.115850   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.115875   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.115787   58770 retry.go:31] will retry after 340.249659ms: waiting for machine to come up
	I0816 13:43:44.457278   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.457722   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.457752   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.457657   58770 retry.go:31] will retry after 476.905089ms: waiting for machine to come up
	I0816 13:43:44.936230   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.936674   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.936714   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.936640   58770 retry.go:31] will retry after 555.288542ms: waiting for machine to come up
	I0816 13:43:45.493301   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.493698   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.493724   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.493657   58770 retry.go:31] will retry after 462.336365ms: waiting for machine to come up
	I0816 13:43:45.957163   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.957553   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.957580   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.957509   58770 retry.go:31] will retry after 886.665194ms: waiting for machine to come up
	I0816 13:43:46.845380   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:46.845743   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:46.845763   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:46.845723   58770 retry.go:31] will retry after 909.05227ms: waiting for machine to come up
	I0816 13:43:47.579134   57240 start.go:360] acquireMachinesLock for embed-certs-302520: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:43:47.755998   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:47.756439   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:47.756460   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:47.756407   58770 retry.go:31] will retry after 1.380778497s: waiting for machine to come up
	I0816 13:43:49.138398   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:49.138861   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:49.138884   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:49.138811   58770 retry.go:31] will retry after 1.788185586s: waiting for machine to come up
	I0816 13:43:50.929915   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:50.930326   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:50.930356   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:50.930276   58770 retry.go:31] will retry after 1.603049262s: waiting for machine to come up
	I0816 13:43:52.536034   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:52.536492   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:52.536518   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:52.536438   58770 retry.go:31] will retry after 1.964966349s: waiting for machine to come up
	I0816 13:43:54.504003   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:54.504408   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:54.504440   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:54.504363   58770 retry.go:31] will retry after 3.616796835s: waiting for machine to come up
	I0816 13:43:58.122295   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:58.122714   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:58.122747   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:58.122673   58770 retry.go:31] will retry after 3.893804146s: waiting for machine to come up
	I0816 13:44:02.020870   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021351   57440 main.go:141] libmachine: (no-preload-311070) Found IP for machine: 192.168.61.116
	I0816 13:44:02.021372   57440 main.go:141] libmachine: (no-preload-311070) Reserving static IP address...
	I0816 13:44:02.021385   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has current primary IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021917   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.021948   57440 main.go:141] libmachine: (no-preload-311070) Reserved static IP address: 192.168.61.116
	I0816 13:44:02.021966   57440 main.go:141] libmachine: (no-preload-311070) DBG | skip adding static IP to network mk-no-preload-311070 - found existing host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"}
	I0816 13:44:02.021977   57440 main.go:141] libmachine: (no-preload-311070) DBG | Getting to WaitForSSH function...
	I0816 13:44:02.021989   57440 main.go:141] libmachine: (no-preload-311070) Waiting for SSH to be available...
	I0816 13:44:02.024661   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025071   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.025094   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025327   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH client type: external
	I0816 13:44:02.025349   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa (-rw-------)
	I0816 13:44:02.025376   57440 main.go:141] libmachine: (no-preload-311070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:02.025387   57440 main.go:141] libmachine: (no-preload-311070) DBG | About to run SSH command:
	I0816 13:44:02.025406   57440 main.go:141] libmachine: (no-preload-311070) DBG | exit 0
	I0816 13:44:02.148864   57440 main.go:141] libmachine: (no-preload-311070) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:02.149279   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:44:02.149868   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.152149   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152460   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.152481   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152681   57440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:44:02.152853   57440 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:02.152869   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:02.153131   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.155341   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155703   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.155743   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155845   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:02.156024   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156199   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156350   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.156557   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.156747   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.156758   57440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:02.261263   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:02.261290   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261514   57440 buildroot.go:166] provisioning hostname "no-preload-311070"
	I0816 13:44:02.261528   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261696   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.264473   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.264892   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.264936   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.265030   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.265198   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265365   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265485   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.265624   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.265796   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.265813   57440 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname
	I0816 13:44:02.384079   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-311070
	
	I0816 13:44:02.384112   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.386756   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387065   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.387104   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387285   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.387501   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387699   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387843   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.387997   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.388193   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.388218   57440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-311070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-311070/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-311070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:02.502089   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:02.502122   57440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:02.502159   57440 buildroot.go:174] setting up certificates
	I0816 13:44:02.502173   57440 provision.go:84] configureAuth start
	I0816 13:44:02.502191   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.502484   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.505215   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505523   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.505560   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505726   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.507770   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508033   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.508062   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508193   57440 provision.go:143] copyHostCerts
	I0816 13:44:02.508249   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:02.508267   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:02.508336   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:02.508426   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:02.508435   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:02.508460   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:02.508520   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:02.508527   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:02.508548   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:02.508597   57440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.no-preload-311070 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-311070]
	I0816 13:44:02.732379   57440 provision.go:177] copyRemoteCerts
	I0816 13:44:02.732434   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:02.732458   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.735444   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.735803   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.735837   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.736040   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.736274   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.736428   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.736587   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:02.819602   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:02.843489   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:44:02.866482   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:02.889908   57440 provision.go:87] duration metric: took 387.723287ms to configureAuth
	I0816 13:44:02.889936   57440 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:02.890151   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:02.890250   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.892851   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893158   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.893184   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893381   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.893607   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893777   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893925   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.894076   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.894267   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.894286   57440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:03.153730   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:03.153755   57440 machine.go:96] duration metric: took 1.000891309s to provisionDockerMachine
	I0816 13:44:03.153766   57440 start.go:293] postStartSetup for "no-preload-311070" (driver="kvm2")
	I0816 13:44:03.153776   57440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:03.153790   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.154088   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:03.154122   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.156612   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.156931   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.156969   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.157113   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.157299   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.157438   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.157595   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.241700   57440 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:03.246133   57440 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:03.246155   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:03.246221   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:03.246292   57440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:03.246379   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:03.257778   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:03.283511   57440 start.go:296] duration metric: took 129.718161ms for postStartSetup
	I0816 13:44:03.283552   57440 fix.go:56] duration metric: took 20.706029776s for fixHost
	I0816 13:44:03.283603   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.286296   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286608   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.286651   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286803   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.287016   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287158   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287298   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.287477   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:03.287639   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:03.287649   57440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:03.389691   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815843.358144829
	
	I0816 13:44:03.389710   57440 fix.go:216] guest clock: 1723815843.358144829
	I0816 13:44:03.389717   57440 fix.go:229] Guest: 2024-08-16 13:44:03.358144829 +0000 UTC Remote: 2024-08-16 13:44:03.283556408 +0000 UTC m=+271.159980604 (delta=74.588421ms)
	I0816 13:44:03.389749   57440 fix.go:200] guest clock delta is within tolerance: 74.588421ms
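The "guest clock delta" decision above is just the absolute difference between the timestamp read inside the VM (`date +%s.%N`) and the host-side timestamp, compared against a tolerance. A minimal standalone Go sketch of that comparison, using the two timestamps from the log (this is an illustration only, not minikube's internal fix.go code, and the 1s tolerance is an assumed value):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports whether the guest clock is close enough
	// to the host clock, mirroring the "guest clock delta is within tolerance"
	// line logged above. The tolerance passed in is an assumption for
	// illustration, not minikube's actual constant.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(0, 1723815843358144829) // 2024-08-16 13:44:03.358144829 UTC
		host := time.Date(2024, 8, 16, 13, 44, 3, 283556408, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints roughly delta=74.588421ms within tolerance=true
	}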
	I0816 13:44:03.389754   57440 start.go:83] releasing machines lock for "no-preload-311070", held for 20.812259998s
	I0816 13:44:03.389779   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.390029   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:03.392788   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393137   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.393160   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393365   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.393870   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394042   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394125   57440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:03.394180   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.394215   57440 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:03.394235   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.396749   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.396813   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397124   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397152   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397180   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397197   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397466   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397543   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397717   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397731   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397874   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397921   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397998   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.398077   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.473552   57440 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:03.497958   57440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:03.644212   57440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:03.651347   57440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:03.651455   57440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:03.667822   57440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:03.667842   57440 start.go:495] detecting cgroup driver to use...
	I0816 13:44:03.667915   57440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:03.685838   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:03.700002   57440 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:03.700073   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:03.713465   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:03.726793   57440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:03.838274   57440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:03.967880   57440 docker.go:233] disabling docker service ...
	I0816 13:44:03.967951   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:03.982178   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:03.994574   57440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:04.132374   57440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:04.242820   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:04.257254   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:04.277961   57440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:04.278018   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.288557   57440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:04.288621   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.299108   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.310139   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.320850   57440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:04.332224   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.342905   57440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.361606   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.372423   57440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:04.382305   57440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:04.382355   57440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:04.396774   57440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:04.408417   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:04.516638   57440 ssh_runner.go:195] Run: sudo systemctl restart crio
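The netfilter handling in this phase is a probe-then-fallback: if the bridge sysctl key is missing (module not loaded), load br_netfilter and continue, then make sure IPv4 forwarding is on before restarting the runtime. A rough local sketch of that pattern using plain os/exec (not minikube's ssh_runner; it assumes the same commands shown in the log and needs root to actually succeed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Probe the bridge netfilter sysctl; a missing key is the
		// "might be okay" case logged above and means the module is not loaded yet.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe br_netfilter failed:", err)
			}
		}
		// Enable IPv4 forwarding, as done in the log before the crio restart.
		if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			fmt.Println("enabling ip_forward failed:", err)
		}
	}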
	I0816 13:44:04.684247   57440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:04.684316   57440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:04.689824   57440 start.go:563] Will wait 60s for crictl version
	I0816 13:44:04.689878   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:04.693456   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:04.732628   57440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:04.732712   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.760111   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.790127   57440 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
	I0816 13:44:04.791315   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:04.794224   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794587   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:04.794613   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794794   57440 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:04.798848   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:04.811522   57440 kubeadm.go:883] updating cluster {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:04.811628   57440 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:04.811685   57440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:04.845546   57440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:04.845567   57440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:04.845630   57440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.845654   57440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.845687   57440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:04.845714   57440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.845694   57440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.845789   57440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.845839   57440 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:44:04.845875   57440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.847440   57440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.847454   57440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.847484   57440 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:44:04.847429   57440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847431   57440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.847508   57440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.036225   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.071514   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.075186   57440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 13:44:05.075233   57440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.075273   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111591   57440 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 13:44:05.111634   57440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.111687   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111704   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.145127   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.145289   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.186194   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.200886   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.203824   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.208201   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.209021   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.234117   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.234893   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.245119   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 13:44:05.305971   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:44:05.306084   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.374880   57440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 13:44:05.374922   57440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.374971   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399114   57440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 13:44:05.399156   57440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.399187   57440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 13:44:05.399216   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399225   57440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.399267   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399318   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:44:05.399414   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:05.401940   57440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 13:44:05.401975   57440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.402006   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.513930   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 13:44:05.513961   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514017   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514032   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.514059   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.514112   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 13:44:05.514115   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.514150   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.634275   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.634340   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.864118   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:07.542707   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.028664898s)
	I0816 13:44:07.542745   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 13:44:07.542770   57440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542773   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.028589727s)
	I0816 13:44:07.542817   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.028737534s)
	I0816 13:44:07.542831   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542837   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:07.542869   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:07.542888   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.908584925s)
	I0816 13:44:07.542937   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:07.542951   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.908590671s)
	I0816 13:44:07.542995   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:07.543034   57440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.678883978s)
	I0816 13:44:07.543068   57440 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 13:44:07.543103   57440 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:07.543138   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:11.390456   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (3.847434032s)
	I0816 13:44:11.390507   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:44:11.390610   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.390647   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.847797916s)
	I0816 13:44:11.390674   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 13:44:11.390684   57440 ssh_runner.go:235] Completed: which crictl: (3.847535001s)
	I0816 13:44:11.390740   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.390780   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (3.847819859s)
	I0816 13:44:11.390810   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.847960553s)
	I0816 13:44:11.390825   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:44:11.390848   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:11.390908   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:11.390923   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (3.848033361s)
	I0816 13:44:11.390978   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:11.461833   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 13:44:11.461859   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461905   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461922   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:44:11.461843   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.461990   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 13:44:11.462013   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:11.462557   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:44:11.462649   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
	I0816 13:44:13.839070   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.377139357s)
	I0816 13:44:13.839101   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 13:44:13.839141   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839207   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839255   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377282192s)
	I0816 13:44:13.839312   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.377281378s)
	I0816 13:44:13.839358   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 13:44:13.839358   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.376690281s)
	I0816 13:44:13.839379   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 13:44:13.839318   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:13.880059   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 13:44:13.880203   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:15.818912   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.979684366s)
	I0816 13:44:15.818943   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 13:44:15.818975   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.938747663s)
	I0816 13:44:15.818986   57440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.819000   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 13:44:15.819043   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:17.795989   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976923514s)
	I0816 13:44:17.796013   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 13:44:17.796040   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:17.796088   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:19.147815   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351703003s)
	I0816 13:44:19.147843   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 13:44:19.147869   57440 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.147919   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.791370   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 13:44:19.791414   57440 cache_images.go:123] Successfully loaded all cached images
	I0816 13:44:19.791421   57440 cache_images.go:92] duration metric: took 14.945842963s to LoadCachedImages
	I0816 13:44:19.791440   57440 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0 crio true true} ...
	I0816 13:44:19.791593   57440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:19.791681   57440 ssh_runner.go:195] Run: crio config
	I0816 13:44:19.843963   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:19.843984   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:19.844003   57440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:19.844029   57440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-311070 NodeName:no-preload-311070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:19.844189   57440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-311070"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
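	(The rendered configuration above is a single multi-document YAML file — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — which the next lines write to /var/tmp/minikube/kubeadm.yaml.new on the guest. A small, assumption-laden Go sketch for listing the documents in such a file with gopkg.in/yaml.v3 follows; the local filename is illustrative.)

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Print the kind and apiVersion of every document in a multi-document
// kubeadm config file, as a quick sanity check of the generated YAML.
func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}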
	I0816 13:44:19.844250   57440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:19.854942   57440 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:19.855014   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:19.864794   57440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 13:44:19.881210   57440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:19.897450   57440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0816 13:44:19.916038   57440 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:19.919995   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:19.934081   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:20.077422   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:20.093846   57440 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070 for IP: 192.168.61.116
	I0816 13:44:20.093864   57440 certs.go:194] generating shared ca certs ...
	I0816 13:44:20.093881   57440 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:20.094055   57440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:20.094120   57440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:20.094135   57440 certs.go:256] generating profile certs ...
	I0816 13:44:20.094236   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.key
	I0816 13:44:20.094325   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key.000c4f90
	I0816 13:44:20.094391   57440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key
	I0816 13:44:20.094529   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:20.094571   57440 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:20.094584   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:20.094621   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:20.094654   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:20.094795   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:20.094874   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:20.096132   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:20.130987   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:20.160701   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:20.187948   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:20.217162   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:44:20.242522   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:44:20.273582   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:20.300613   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:20.328363   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:20.353396   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:20.377770   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:20.401760   57440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:20.418302   57440 ssh_runner.go:195] Run: openssl version
	I0816 13:44:20.424065   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:20.434841   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439352   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439398   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.445210   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:20.455727   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:20.466095   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470528   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470568   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.476080   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:20.486189   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:20.496373   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500696   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500737   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.506426   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:20.517130   57440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:20.521664   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:20.527604   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:20.533478   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:20.539285   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:20.545042   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:20.550681   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
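	(Each `openssl x509 -checkend 86400` run above asks whether the given certificate expires within the next 24 hours; restart only proceeds if all control-plane certs pass. The same check expressed in Go with crypto/x509, as an illustrative sketch; the path is one of the certificates from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Load a PEM certificate and report whether it expires within 24 hours,
// mirroring `openssl x509 -noout -checkend 86400`.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}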
	I0816 13:44:20.556239   57440 kubeadm.go:392] StartCluster: {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:20.556314   57440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:20.556391   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.594069   57440 cri.go:89] found id: ""
	I0816 13:44:20.594128   57440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:20.604067   57440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:20.604085   57440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:20.604131   57440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:20.614182   57440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:20.615072   57440 kubeconfig.go:125] found "no-preload-311070" server: "https://192.168.61.116:8443"
	I0816 13:44:20.617096   57440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:20.626046   57440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0816 13:44:20.626069   57440 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:20.626078   57440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:20.626114   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.659889   57440 cri.go:89] found id: ""
	I0816 13:44:20.659954   57440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:20.676977   57440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:20.686930   57440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:20.686946   57440 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:20.686985   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:20.696144   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:20.696222   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:20.705550   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:20.714350   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:20.714399   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:20.723636   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.732287   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:20.732329   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.741390   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:20.749913   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:20.749956   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:20.758968   57440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:20.768054   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:20.872847   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:21.933273   57440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060394194s)
	I0816 13:44:21.933303   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.130462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
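	(Because existing configuration files were found, the restart path above re-runs individual `kubeadm init phase` subcommands — certs, kubeconfig, kubelet-start, control-plane — against the copied config instead of a full init. The sketch below runs that same sequence as plain exec calls; minikube actually executes these on the guest over SSH with its bundled binaries first on PATH.)

package main

import (
	"log"
	"os"
	"os/exec"
)

// Re-run the kubeadm init phases seen in the log, in order, using the
// versioned binaries directory on PATH as the log's env override does.
func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.0:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
}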
	I0816 13:44:23.689897   58430 start.go:364] duration metric: took 2m7.587518205s to acquireMachinesLock for "default-k8s-diff-port-893736"
	I0816 13:44:23.689958   58430 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:23.689965   58430 fix.go:54] fixHost starting: 
	I0816 13:44:23.690363   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:23.690401   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:23.707766   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0816 13:44:23.708281   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:23.709439   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:23.709462   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:23.709757   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:23.709906   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:23.710017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:23.711612   58430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-893736: state=Stopped err=<nil>
	I0816 13:44:23.711655   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	W0816 13:44:23.711797   58430 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:23.713600   58430 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-893736" ...
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
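	(configureAuth above regenerates the docker-machine style server certificate with SANs covering 127.0.0.1, the VM IP, localhost, minikube and the machine name, then copies it to /etc/docker. A self-signed approximation in Go follows; minikube signs with its own CA and uses its configured expiration, so this is illustrative only.)

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// Generate a key pair and a server certificate whose SANs match the
// san=[...] list logged above, then print the certificate as PEM.
func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-882237"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~26280h, as in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-882237"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.105")},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}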
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:44:23.714877   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Start
	I0816 13:44:23.715069   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring networks are active...
	I0816 13:44:23.715788   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network default is active
	I0816 13:44:23.716164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network mk-default-k8s-diff-port-893736 is active
	I0816 13:44:23.716648   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Getting domain xml...
	I0816 13:44:23.717424   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Creating domain...
	I0816 13:44:24.979917   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting to get IP...
	I0816 13:44:24.980942   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981375   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981448   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:24.981363   59082 retry.go:31] will retry after 199.038336ms: waiting for machine to come up
	I0816 13:44:25.181886   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182350   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.182330   59082 retry.go:31] will retry after 297.566018ms: waiting for machine to come up
	I0816 13:44:25.481811   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482271   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482296   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.482234   59082 retry.go:31] will retry after 297.833233ms: waiting for machine to come up
	I0816 13:44:25.781831   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782445   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782479   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.782400   59082 retry.go:31] will retry after 459.810978ms: waiting for machine to come up
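
The DBG retry lines above show the kvm2 driver polling libvirt for the domain's DHCP lease, sleeping a growing, jittered interval between attempts. A small Go sketch of that retry pattern follows; the initial delay, growth factor, and the stand-in condition are assumptions for illustration, not minikube's retry.go.

```go
// Poll a condition until it succeeds or a deadline passes, growing the
// delay (with jitter) between attempts, as in the "will retry after ..."
// lines above. Sketch under stated assumptions.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond // hypothetical initial delay
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// add jitter and grow the delay, roughly matching the log's cadence
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}
```
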
	I0816 13:44:22.220022   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.317717   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:22.317800   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:22.818025   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.318171   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.354996   57440 api_server.go:72] duration metric: took 1.037294965s to wait for apiserver process to appear ...
	I0816 13:44:23.355023   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:23.355043   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:23.355677   57440 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0816 13:44:23.855190   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.719152   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.719184   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.719204   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.756329   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.756366   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.855581   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.862856   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:26.862885   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.355555   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.365664   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.365702   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.855844   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.863185   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.863227   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:28.355490   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:28.361410   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:44:28.374558   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:28.374593   57440 api_server.go:131] duration metric: took 5.019562023s to wait for apiserver health ...
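
The probe sequence above moves from connection refused, to 403 for the anonymous user, to 500 while poststarthooks (rbac/bootstrap-roles and friends) finish, and finally to 200. A minimal Go sketch of such a healthz poll is below; TLS verification is skipped purely to keep the example short, and this is an illustration rather than minikube's api_server.go.

```go
// Poll an apiserver /healthz endpoint until it returns 200 or the timeout
// expires. The URL is the one probed in the log above; skipping TLS
// verification is a shortcut for the sketch, not a recommendation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.116:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
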
	I0816 13:44:28.374604   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:28.374613   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:28.376749   57440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:28.378413   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:28.401199   57440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:28.420798   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:28.452605   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:28.452645   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:28.452655   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:28.452663   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:28.452671   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:28.452680   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:28.452689   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:28.452704   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:28.452710   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:44:28.452719   57440 system_pods.go:74] duration metric: took 31.89892ms to wait for pod list to return data ...
	I0816 13:44:28.452726   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:28.463229   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:28.463262   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:28.463275   57440 node_conditions.go:105] duration metric: took 10.544476ms to run NodePressure ...
	I0816 13:44:28.463296   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:28.809304   57440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819091   57440 kubeadm.go:739] kubelet initialised
	I0816 13:44:28.819115   57440 kubeadm.go:740] duration metric: took 9.779672ms waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819124   57440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:28.827828   57440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.840277   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840310   57440 pod_ready.go:82] duration metric: took 12.450089ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.840322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840333   57440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.847012   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847036   57440 pod_ready.go:82] duration metric: took 6.692927ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.847045   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847050   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.861358   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861404   57440 pod_ready.go:82] duration metric: took 14.346379ms for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.861417   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861428   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.870641   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870663   57440 pod_ready.go:82] duration metric: took 9.224713ms for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.870671   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870678   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.224281   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224310   57440 pod_ready.go:82] duration metric: took 353.622663ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.224322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224331   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.624518   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624552   57440 pod_ready.go:82] duration metric: took 400.212041ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.624567   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624577   57440 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:30.030291   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030327   57440 pod_ready.go:82] duration metric: took 405.73495ms for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:30.030341   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030352   57440 pod_ready.go:39] duration metric: took 1.211214389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
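
Each pod_ready.go wait above boils down to reading the pod's Ready condition (and skipping pods whose node is not yet Ready). A client-go sketch of the core check follows; the kubeconfig path and pod name are taken from the log as placeholders, and the helper is illustrative rather than minikube's implementation.

```go
// Check whether a pod currently reports the Ready condition, the check
// behind the pod_ready.go lines above. Sketch only; paths and names are
// placeholders copied from the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-3966/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-8kbs6", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
}
```
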
	I0816 13:44:30.030372   57440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:30.045247   57440 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:30.045279   57440 kubeadm.go:597] duration metric: took 9.441179951s to restartPrimaryControlPlane
	I0816 13:44:30.045291   57440 kubeadm.go:394] duration metric: took 9.489057744s to StartCluster
	I0816 13:44:30.045312   57440 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.045410   57440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:30.047053   57440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.047310   57440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:30.047415   57440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:30.047486   57440 addons.go:69] Setting storage-provisioner=true in profile "no-preload-311070"
	I0816 13:44:30.047521   57440 addons.go:234] Setting addon storage-provisioner=true in "no-preload-311070"
	W0816 13:44:30.047534   57440 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:30.047569   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048048   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048079   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048302   57440 addons.go:69] Setting default-storageclass=true in profile "no-preload-311070"
	I0816 13:44:30.048339   57440 addons.go:69] Setting metrics-server=true in profile "no-preload-311070"
	I0816 13:44:30.048377   57440 addons.go:234] Setting addon metrics-server=true in "no-preload-311070"
	W0816 13:44:30.048387   57440 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:30.048424   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048343   57440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-311070"
	I0816 13:44:30.048812   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048834   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048933   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048957   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.049282   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:30.050905   57440 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:30.052478   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:30.069405   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0816 13:44:30.069463   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0816 13:44:30.069735   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0816 13:44:30.069949   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070072   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070145   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070488   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070506   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070586   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070598   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070618   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070627   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070977   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071006   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071031   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071212   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071639   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.071621   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.074680   57440 addons.go:234] Setting addon default-storageclass=true in "no-preload-311070"
	W0816 13:44:30.074699   57440 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:30.074730   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.075073   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.075100   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.088961   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0816 13:44:30.089421   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.089952   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.089971   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.090128   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0816 13:44:30.090429   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.090491   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.090744   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.090933   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.090950   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.091263   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.091463   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.093258   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.093571   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:30.094479   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0816 13:44:30.094965   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.095482   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.095502   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.095857   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.096322   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.096361   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.128555   57440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.128676   57440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:26.244353   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245158   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.245062   59082 retry.go:31] will retry after 680.176025ms: waiting for machine to come up
	I0816 13:44:26.926654   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927139   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.927106   59082 retry.go:31] will retry after 720.530442ms: waiting for machine to come up
	I0816 13:44:27.648858   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649342   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649367   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:27.649289   59082 retry.go:31] will retry after 930.752133ms: waiting for machine to come up
	I0816 13:44:28.581283   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581684   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581709   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:28.581635   59082 retry.go:31] will retry after 972.791503ms: waiting for machine to come up
	I0816 13:44:29.556168   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556583   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:29.556525   59082 retry.go:31] will retry after 1.18129541s: waiting for machine to come up
	I0816 13:44:30.739498   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739951   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739978   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:30.739883   59082 retry.go:31] will retry after 2.27951459s: waiting for machine to come up
	I0816 13:44:30.133959   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 13:44:30.134516   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.135080   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.135105   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.135463   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.135598   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.137494   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.137805   57440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.137824   57440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:30.137839   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.141006   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141509   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.141544   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141772   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.141952   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.142150   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.142305   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.164598   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:30.164627   57440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:30.164653   57440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.164654   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.164662   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:30.164687   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.168935   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169259   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169588   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169615   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169806   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.169828   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169859   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169953   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170096   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170103   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.170243   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170241   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.170389   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170505   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.285806   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:30.312267   57440 node_ready.go:35] waiting up to 6m0s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:30.406371   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.409491   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:30.409515   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:30.440485   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:30.440508   57440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:30.480735   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.484549   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:30.484573   57440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:30.541485   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:32.535406   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:33.204746   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.723973086s)
	I0816 13:44:33.204802   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204817   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.204843   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.798437569s)
	I0816 13:44:33.204877   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204889   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205092   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205116   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205126   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205134   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205357   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205359   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205379   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205387   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205395   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205408   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205445   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205454   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205593   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205605   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.214075   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.214095   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.214307   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.214320   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259136   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.717608276s)
	I0816 13:44:33.259188   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259212   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259468   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.259485   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259495   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259503   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259988   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.260004   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.260016   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.260026   57440 addons.go:475] Verifying addon metrics-server=true in "no-preload-311070"
	I0816 13:44:33.262190   57440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
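
The stat failure above is why the preloaded-image path is abandoned here: each expected image tarball is checked on disk before any transfer is attempted, and the first missing file aborts the load. A minimal Go sketch of that check (illustrative only, not the minikube implementation; the cache path is copied from the log line above):

package main

import (
	"fmt"
	"os"
)

// loadCachedImage is a simplified, hypothetical stand-in for the behaviour
// visible above: stat the cached tarball and give up if it is missing.
func loadCachedImage(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("LoadCachedImages: stat %s: %w", path, err)
	}
	// A real loader would now transfer the tarball to the node and load it
	// with podman/crictl; omitted here.
	return nil
}

func main() {
	// Cache path copied from the log for illustration.
	p := "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0"
	if err := loadCachedImage(p); err != nil {
		fmt.Fprintln(os.Stderr, "X Unable to load cached images:", err)
	}
}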
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
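
The one-liner above keeps the control-plane hosts entry idempotent: any existing line for control-plane.minikube.internal is filtered out, the current IP is appended with a tab separator, and the temp file is copied back over /etc/hosts. A small Go sketch that assembles the same shell command (illustrative only; it mirrors the command string shown in the log):

package main

import "fmt"

// hostsUpdateCmd builds a shell one-liner equivalent to the command in the
// log: strip any existing entry for host, append "ip<TAB>host", then copy
// the temp file back over /etc/hosts. The grep pattern keeps a literal \t
// for bash's $'...' expansion, while the echo embeds a real tab.
func hostsUpdateCmd(ip, host string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.72.105", "control-plane.minikube.internal"))
}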
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
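
Each `openssl x509 -checkend 86400` call above asks whether the certificate will expire within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. A rough Go equivalent using crypto/x509, shown only to spell out what the check does (the path is one of the certificates listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}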
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
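
From this point the log polls `pgrep` for the kube-apiserver process at roughly 500ms intervals until it appears. A hedged Go sketch of such a wait loop (the pgrep pattern and cadence are taken from the log; the timeout value is an assumption, and this is not the minikube code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep about twice a second until the kube-apiserver
// process shows up or the timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exited 0: process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}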
	I0816 13:44:33.022378   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023028   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:33.022950   59082 retry.go:31] will retry after 1.906001247s: waiting for machine to come up
	I0816 13:44:34.930169   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930674   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930702   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:34.930612   59082 retry.go:31] will retry after 2.809719622s: waiting for machine to come up
	I0816 13:44:33.263780   57440 addons.go:510] duration metric: took 3.216351591s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 13:44:34.816280   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:36.817474   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.742122   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742506   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742545   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:37.742464   59082 retry.go:31] will retry after 4.139761236s: waiting for machine to come up
	I0816 13:44:37.815407   57440 node_ready.go:49] node "no-preload-311070" has status "Ready":"True"
	I0816 13:44:37.815428   57440 node_ready.go:38] duration metric: took 7.503128864s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:37.815437   57440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:37.820318   57440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825460   57440 pod_ready.go:93] pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.825478   57440 pod_ready.go:82] duration metric: took 5.136508ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825486   57440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829609   57440 pod_ready.go:93] pod "etcd-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.829628   57440 pod_ready.go:82] duration metric: took 4.133294ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829635   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:39.835973   57440 pod_ready.go:103] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:40.335270   57440 pod_ready.go:93] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:40.335289   57440 pod_ready.go:82] duration metric: took 2.505647853s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:40.335298   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:43.233555   57240 start.go:364] duration metric: took 55.654362151s to acquireMachinesLock for "embed-certs-302520"
	I0816 13:44:43.233638   57240 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:43.233649   57240 fix.go:54] fixHost starting: 
	I0816 13:44:43.234047   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:43.234078   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:43.253929   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0816 13:44:43.254405   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:43.254878   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:44:43.254900   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:43.255235   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:43.255400   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:44:43.255578   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:44:43.257434   57240 fix.go:112] recreateIfNeeded on embed-certs-302520: state=Stopped err=<nil>
	I0816 13:44:43.257472   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	W0816 13:44:43.257637   57240 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:43.259743   57240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-302520" ...
	I0816 13:44:41.885729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886143   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Found IP for machine: 192.168.50.186
	I0816 13:44:41.886162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserving static IP address...
	I0816 13:44:41.886178   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has current primary IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886540   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.886570   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | skip adding static IP to network mk-default-k8s-diff-port-893736 - found existing host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"}
	I0816 13:44:41.886585   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserved static IP address: 192.168.50.186
	I0816 13:44:41.886600   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for SSH to be available...
	I0816 13:44:41.886617   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Getting to WaitForSSH function...
	I0816 13:44:41.888671   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889003   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.889047   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889118   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH client type: external
	I0816 13:44:41.889142   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa (-rw-------)
	I0816 13:44:41.889181   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:41.889201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | About to run SSH command:
	I0816 13:44:41.889215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | exit 0
	I0816 13:44:42.017010   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:42.017374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetConfigRaw
	I0816 13:44:42.017979   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.020580   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.020958   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.020992   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.021174   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:44:42.021342   58430 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:42.021356   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.021521   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.023732   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024033   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.024057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.024354   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024667   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.024811   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.024994   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.025005   58430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:42.137459   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:42.137495   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137722   58430 buildroot.go:166] provisioning hostname "default-k8s-diff-port-893736"
	I0816 13:44:42.137745   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137925   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.140599   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.140987   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.141017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.141148   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.141309   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141430   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141536   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.141677   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.141843   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.141855   58430 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893736 && echo "default-k8s-diff-port-893736" | sudo tee /etc/hostname
	I0816 13:44:42.267643   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893736
	
	I0816 13:44:42.267670   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.270489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.270834   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.270867   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.271089   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.271266   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271405   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271527   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.271675   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.271829   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.271847   58430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:42.398010   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:42.398057   58430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:42.398122   58430 buildroot.go:174] setting up certificates
	I0816 13:44:42.398139   58430 provision.go:84] configureAuth start
	I0816 13:44:42.398157   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.398484   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.401217   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401566   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.401587   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.404082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404380   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.404425   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404541   58430 provision.go:143] copyHostCerts
	I0816 13:44:42.404596   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:42.404606   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:42.404666   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:42.404758   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:42.404767   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:42.404788   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:42.404850   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:42.404857   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:42.404873   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:42.404965   58430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893736 san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]
	I0816 13:44:42.551867   58430 provision.go:177] copyRemoteCerts
	I0816 13:44:42.551928   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:42.551954   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.554945   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555276   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.555316   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.555699   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.555838   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.555964   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:42.643591   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:42.667108   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 13:44:42.690852   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:42.714001   58430 provision.go:87] duration metric: took 315.84846ms to configureAuth
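Note: the configureAuth step above generates a server certificate whose SANs are 127.0.0.1, 192.168.50.186, the profile name, localhost and minikube, signed by the local CA. A minimal Go sketch of signing such a certificate with crypto/x509 (package, function name and output paths are illustrative, not minikube's actual code):

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert signs a server certificate with the given CA cert/key and
// writes server.pem / server-key.pem next to the working directory.
// SANs and org mirror the values shown in the log; paths are placeholders.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-893736"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-893736", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.186")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	certOut, _ := os.Create("server.pem")
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut, _ := os.Create("server-key.pem")
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return nil
}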
	I0816 13:44:42.714030   58430 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:42.714189   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:42.714263   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.716726   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.717110   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717282   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.717486   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717621   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717740   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.717883   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.718038   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.718055   58430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:42.988769   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:42.988798   58430 machine.go:96] duration metric: took 967.444538ms to provisionDockerMachine
	I0816 13:44:42.988814   58430 start.go:293] postStartSetup for "default-k8s-diff-port-893736" (driver="kvm2")
	I0816 13:44:42.988833   58430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:42.988864   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.989226   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:42.989261   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.991868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.992184   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992364   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.992537   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.992689   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.992838   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.079199   58430 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:43.083277   58430 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:43.083296   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:43.083357   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:43.083459   58430 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:43.083576   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:43.092684   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:43.115693   58430 start.go:296] duration metric: took 126.86489ms for postStartSetup
	I0816 13:44:43.115735   58430 fix.go:56] duration metric: took 19.425768942s for fixHost
	I0816 13:44:43.115761   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.118597   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.118915   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.118947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.119100   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.119306   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119442   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.119683   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:43.119840   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:43.119850   58430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:43.233362   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815883.193133132
	
	I0816 13:44:43.233394   58430 fix.go:216] guest clock: 1723815883.193133132
	I0816 13:44:43.233406   58430 fix.go:229] Guest: 2024-08-16 13:44:43.193133132 +0000 UTC Remote: 2024-08-16 13:44:43.115740856 +0000 UTC m=+147.151935383 (delta=77.392276ms)
	I0816 13:44:43.233479   58430 fix.go:200] guest clock delta is within tolerance: 77.392276ms
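Note: the guest-clock check above runs `date +%s.%N` in the VM and compares it against the host clock, accepting the ~77ms delta. A rough illustration of that comparison; the parsing helper and the 2s tolerance are assumptions for the sketch, not minikube internals:

package clocksketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses the "seconds.nanoseconds" output of `date +%s.%N`
// and returns the signed offset from the supplied host timestamp.
func guestDelta(output string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("unexpected date output %q", output)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(sec, nsec)
	return host.Sub(guest), nil
}

// withinTolerance mirrors the "guest clock delta is within tolerance" message;
// the 2s threshold here is an assumed example value.
func withinTolerance(d time.Duration) bool {
	if d < 0 {
		d = -d
	}
	return d < 2*time.Second
}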
	I0816 13:44:43.233486   58430 start.go:83] releasing machines lock for "default-k8s-diff-port-893736", held for 19.543554553s
	I0816 13:44:43.233517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.233783   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:43.236492   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.236875   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.236901   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.237136   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237703   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237943   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.238074   58430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:43.238153   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.238182   58430 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:43.238215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.240639   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241029   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241193   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241360   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.241573   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241581   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.241601   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241733   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241732   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.241895   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.242052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.242178   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.352903   58430 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:43.359071   58430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:43.509233   58430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:43.516592   58430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:43.516666   58430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:43.534069   58430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:43.534096   58430 start.go:495] detecting cgroup driver to use...
	I0816 13:44:43.534167   58430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:43.553305   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:43.569958   58430 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:43.570007   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:43.590642   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:43.606411   58430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:43.733331   58430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:43.882032   58430 docker.go:233] disabling docker service ...
	I0816 13:44:43.882110   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:43.896780   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:43.909702   58430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:44.044071   58430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:44.170798   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:44.184421   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:44.203201   58430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:44.203269   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.213647   58430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:44.213708   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.224261   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.235295   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.247670   58430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:44.264065   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.276212   58430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.296049   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
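Note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports. A simplified Go sketch of the same idea applied to a local copy of that file (it omits the conmon_cgroup insertion and the in-place list edit; the function name is illustrative):

package criosketch

import (
	"os"
	"regexp"
)

// rewriteCrioConf applies, in place, substitutions equivalent to the sed
// commands in the log: pause_image, cgroup_manager, and a default_sysctls
// block that unlocks net.ipv4.ip_unprivileged_port_start.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return os.WriteFile(path, []byte(s), 0o644)
}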
	I0816 13:44:44.307920   58430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:44.319689   58430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:44.319746   58430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:44.335735   58430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:44.352364   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:44.476754   58430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:44.618847   58430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:44.618914   58430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:44.623946   58430 start.go:563] Will wait 60s for crictl version
	I0816 13:44:44.624004   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:44:44.627796   58430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:44.666274   58430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:44.666350   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.694476   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.723937   58430 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:43.261237   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Start
	I0816 13:44:43.261399   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring networks are active...
	I0816 13:44:43.262183   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network default is active
	I0816 13:44:43.262591   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network mk-embed-certs-302520 is active
	I0816 13:44:43.263044   57240 main.go:141] libmachine: (embed-certs-302520) Getting domain xml...
	I0816 13:44:43.263849   57240 main.go:141] libmachine: (embed-certs-302520) Creating domain...
	I0816 13:44:44.565632   57240 main.go:141] libmachine: (embed-certs-302520) Waiting to get IP...
	I0816 13:44:44.566705   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.567120   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.567211   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.567113   59274 retry.go:31] will retry after 259.265867ms: waiting for machine to come up
	I0816 13:44:44.827603   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.828117   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.828152   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.828043   59274 retry.go:31] will retry after 271.270487ms: waiting for machine to come up
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.725112   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:44.728077   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728446   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:44.728469   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728728   58430 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:44.733365   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
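Note: the /etc/hosts update above is the usual "filter out the stale entry, append the new one" rewrite. A minimal local Go equivalent; the entry value is taken from the log, everything else (package, function name) is illustrative:

package hostsketch

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\thostname" and
// appends "ip\thostname", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}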
	I0816 13:44:44.746196   58430 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:44.746325   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:44.746385   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:44.787402   58430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:44.787481   58430 ssh_runner.go:195] Run: which lz4
	I0816 13:44:44.791755   58430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:44.797290   58430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:44.797320   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:44:42.342663   57440 pod_ready.go:93] pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.342685   57440 pod_ready.go:82] duration metric: took 2.007381193s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.342694   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346807   57440 pod_ready.go:93] pod "kube-proxy-b8d5b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.346824   57440 pod_ready.go:82] duration metric: took 4.124529ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346832   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351010   57440 pod_ready.go:93] pod "kube-scheduler-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.351025   57440 pod_ready.go:82] duration metric: took 4.186812ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351032   57440 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:44.358663   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:46.359708   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:45.100554   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.101150   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.101265   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.101207   59274 retry.go:31] will retry after 309.469795ms: waiting for machine to come up
	I0816 13:44:45.412518   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.413009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.413040   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.412975   59274 retry.go:31] will retry after 502.564219ms: waiting for machine to come up
	I0816 13:44:45.917731   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.918284   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.918316   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.918235   59274 retry.go:31] will retry after 723.442166ms: waiting for machine to come up
	I0816 13:44:46.642971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:46.643467   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:46.643498   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:46.643400   59274 retry.go:31] will retry after 600.365383ms: waiting for machine to come up
	I0816 13:44:47.245233   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:47.245756   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:47.245785   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:47.245710   59274 retry.go:31] will retry after 1.06438693s: waiting for machine to come up
	I0816 13:44:48.312043   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:48.312842   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:48.312886   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:48.312840   59274 retry.go:31] will retry after 903.877948ms: waiting for machine to come up
	I0816 13:44:49.218419   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:49.218805   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:49.218835   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:49.218758   59274 retry.go:31] will retry after 1.73892963s: waiting for machine to come up
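Note: the repeated "will retry after ...: waiting for machine to come up" lines are a retry loop with growing, jittered delays. A generic sketch of that pattern; the names, delay cap and timeout handling are assumptions, not the retry.go implementation:

package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with an increasing, jittered delay until it succeeds
// or the deadline passes, similar to the retry messages in the log.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 2*time.Second {
			delay *= 2 // back off, capped at roughly 2s per attempt
		}
	}
}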
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.230345   58430 crio.go:462] duration metric: took 1.438624377s to copy over tarball
	I0816 13:44:46.230429   58430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:48.358060   58430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127589486s)
	I0816 13:44:48.358131   58430 crio.go:469] duration metric: took 2.127754698s to extract the tarball
	I0816 13:44:48.358145   58430 ssh_runner.go:146] rm: /preloaded.tar.lz4
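Note: the preload path above is: if `crictl images` shows the expected images missing, copy the lz4 tarball over and unpack it into /var with tar, then delete it. A local sketch of just the unpack-and-clean-up step via exec (paths and flags copied from the logged command; running it for real needs the tarball and root):

package preloadsketch

import (
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// security xattrs, matching the ssh_runner invocation shown in the log.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball) // free the ~370MB archive once extracted
}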
	I0816 13:44:48.398054   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:48.449391   58430 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:44:48.449416   58430 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:44:48.449425   58430 kubeadm.go:934] updating node { 192.168.50.186 8444 v1.31.0 crio true true} ...
	I0816 13:44:48.449576   58430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-893736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:48.449662   58430 ssh_runner.go:195] Run: crio config
	I0816 13:44:48.499389   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:48.499413   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:48.499424   58430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:48.499452   58430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893736 NodeName:default-k8s-diff-port-893736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:48.499576   58430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-893736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:48.499653   58430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:48.509639   58430 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:48.509706   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:48.519099   58430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 13:44:48.535866   58430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:48.552977   58430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 13:44:48.571198   58430 ssh_runner.go:195] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:48.575881   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:48.587850   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:48.703848   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:48.721449   58430 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736 for IP: 192.168.50.186
	I0816 13:44:48.721476   58430 certs.go:194] generating shared ca certs ...
	I0816 13:44:48.721496   58430 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:48.721677   58430 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:48.721731   58430 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:48.721745   58430 certs.go:256] generating profile certs ...
	I0816 13:44:48.721843   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/client.key
	I0816 13:44:48.721926   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key.64c9b41b
	I0816 13:44:48.721980   58430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key
	I0816 13:44:48.722107   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:48.722138   58430 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:48.722149   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:48.722182   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:48.722204   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:48.722225   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:48.722258   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:48.722818   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:48.779462   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:48.814653   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:48.887435   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:48.913644   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 13:44:48.937536   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:48.960729   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:48.984375   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:44:49.007997   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:49.031631   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:49.054333   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:49.076566   58430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:49.092986   58430 ssh_runner.go:195] Run: openssl version
	I0816 13:44:49.098555   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:49.109454   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114868   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114934   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.120811   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:49.131829   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:49.142825   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147276   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147322   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.152678   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:49.163622   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:49.174426   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179353   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179406   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.185129   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
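Note: the block above installs each PEM into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0). A sketch of computing that hash by shelling out to the same `openssl x509 -hash` call and creating the link; the helper name is made up:

package casketch

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates
// the <certsDir>/<hash>.0 symlink that the log's `ln -fs` produces.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, fmt.Sprintf("%s.0", hash))
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}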
	I0816 13:44:49.196668   58430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:49.201447   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:49.207718   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:49.213869   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:49.220325   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:49.226220   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:49.231971   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
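Note: each `openssl x509 -checkend 86400` call above simply asks whether the certificate expires within the next 24h. The same check expressed natively with crypto/x509, as a small sketch (the file reading and 24h window mirror the log; the function name is illustrative):

package expirysketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires inside the given window (86400s == 24h in the log's checks).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, fmt.Errorf("parse %s: %w", path, err)
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}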
	I0816 13:44:49.238080   58430 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:49.238178   58430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:49.238231   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.276621   58430 cri.go:89] found id: ""
	I0816 13:44:49.276719   58430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:49.287765   58430 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:49.287785   58430 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:49.287829   58430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:49.298038   58430 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:49.299171   58430 kubeconfig.go:125] found "default-k8s-diff-port-893736" server: "https://192.168.50.186:8444"
	I0816 13:44:49.301521   58430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:49.311800   58430 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.186
	I0816 13:44:49.311833   58430 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:49.311845   58430 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:49.311899   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.363716   58430 cri.go:89] found id: ""
	I0816 13:44:49.363784   58430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:49.381053   58430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:49.391306   58430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:49.391322   58430 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:49.391370   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 13:44:49.400770   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:49.400829   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:49.410252   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 13:44:49.419405   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:49.419481   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:49.429330   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.438521   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:49.438587   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.448144   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 13:44:49.456744   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:49.456805   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:49.466062   58430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:49.476159   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:49.597639   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.673182   58430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075495766s)
	I0816 13:44:50.673218   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.887802   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.958384   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:48.858145   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:51.358083   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:50.959807   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:50.960217   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:50.960236   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:50.960188   59274 retry.go:31] will retry after 2.32558417s: waiting for machine to come up
	I0816 13:44:53.287947   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:53.288441   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:53.288470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:53.288388   59274 retry.go:31] will retry after 1.85414625s: waiting for machine to come up
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.054015   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:51.054101   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.554846   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.055178   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.082087   58430 api_server.go:72] duration metric: took 1.028080423s to wait for apiserver process to appear ...
	I0816 13:44:52.082114   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:52.082133   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:52.082624   58430 api_server.go:269] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": dial tcp 192.168.50.186:8444: connect: connection refused
	I0816 13:44:52.582261   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.336530   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.336565   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.336580   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.374699   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.374733   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.583112   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.588756   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:55.588782   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.082182   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.088062   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.088108   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.582273   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.587049   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.587087   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:57.082664   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:57.092562   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:44:57.100740   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:57.100767   58430 api_server.go:131] duration metric: took 5.018647278s to wait for apiserver health ...
	I0816 13:44:57.100777   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:57.100784   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:57.102775   58430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:53.358390   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:55.358437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:57.104079   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:57.115212   58430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:57.137445   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:57.150376   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:57.150412   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:57.150422   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:57.150435   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:57.150448   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:57.150454   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:57.150458   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:57.150463   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:57.150472   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:44:57.150481   58430 system_pods.go:74] duration metric: took 13.019757ms to wait for pod list to return data ...
	I0816 13:44:57.150489   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:57.153699   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:57.153721   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:57.153731   58430 node_conditions.go:105] duration metric: took 3.237407ms to run NodePressure ...
	I0816 13:44:57.153752   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:57.439130   58430 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446848   58430 kubeadm.go:739] kubelet initialised
	I0816 13:44:57.446876   58430 kubeadm.go:740] duration metric: took 7.718113ms waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446885   58430 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:57.452263   58430 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.459002   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459024   58430 pod_ready.go:82] duration metric: took 6.735487ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.459033   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459039   58430 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.463723   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463742   58430 pod_ready.go:82] duration metric: took 4.695932ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.463751   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463756   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.468593   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468619   58430 pod_ready.go:82] duration metric: took 4.856498ms for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.468632   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468643   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.541251   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541278   58430 pod_ready.go:82] duration metric: took 72.626413ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.541290   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541296   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.940580   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940616   58430 pod_ready.go:82] duration metric: took 399.312571ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.940627   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940635   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.340647   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340671   58430 pod_ready.go:82] duration metric: took 400.026004ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.340683   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340694   58430 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.750549   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750573   58430 pod_ready.go:82] duration metric: took 409.872187ms for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.750588   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750598   58430 pod_ready.go:39] duration metric: took 1.303702313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:58.750626   58430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:58.766462   58430 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:58.766482   58430 kubeadm.go:597] duration metric: took 9.478690644s to restartPrimaryControlPlane
	I0816 13:44:58.766491   58430 kubeadm.go:394] duration metric: took 9.528416258s to StartCluster
	I0816 13:44:58.766509   58430 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.766572   58430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:58.770737   58430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.771036   58430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:58.771138   58430 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:58.771218   58430 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771232   58430 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771245   58430 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771281   58430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893736"
	I0816 13:44:58.771252   58430 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771337   58430 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:58.771371   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771285   58430 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771447   58430 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:58.771485   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771231   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:58.771653   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771682   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771750   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771781   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771839   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771886   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.772665   58430 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:58.773992   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:58.788717   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0816 13:44:58.789233   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.789833   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.789859   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.790269   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.790882   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.790913   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.791553   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0816 13:44:58.791556   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0816 13:44:58.791945   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.791979   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.792413   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792440   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.792813   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.792963   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792986   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.793013   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.793374   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.793940   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.793986   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.796723   58430 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.796740   58430 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:58.796763   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.797138   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.797184   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.806753   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0816 13:44:58.807162   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.807605   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.807624   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.807984   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.808229   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.809833   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.811642   58430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:58.812716   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0816 13:44:58.812888   58430 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:58.812902   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:58.812937   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.813184   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.813668   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.813695   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.813725   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0816 13:44:58.814101   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.814207   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.814696   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.814715   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.814948   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.814961   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.815304   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.815518   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.816936   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817482   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.817529   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.817543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817871   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.818057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.818219   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.818397   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.819251   58430 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:55.143862   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:55.144403   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:55.144433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:55.144354   59274 retry.go:31] will retry after 3.573850343s: waiting for machine to come up
	I0816 13:44:58.720104   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:58.720571   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:58.720606   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:58.720510   59274 retry.go:31] will retry after 4.52867767s: waiting for machine to come up
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.820720   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:58.820733   58430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:58.820747   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.823868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824290   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.824305   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.824629   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.824764   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.824860   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.830530   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0816 13:44:58.830894   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.831274   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.831294   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.831583   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.831729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.833321   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.833512   58430 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:58.833526   58430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:58.833543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.836244   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836626   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.836649   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836762   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.836947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.837098   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.837234   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.973561   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:58.995763   58430 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:44:59.118558   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:59.126100   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:59.126125   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:59.154048   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:59.162623   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:59.162649   58430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:59.213614   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.213635   58430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:59.233653   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.485000   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485030   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485329   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.485384   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485397   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485406   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485414   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485736   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485777   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485741   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.491692   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.491711   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.491938   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.491957   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.273964   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.04027784s)
	I0816 13:45:00.274018   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274036   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274032   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119945545s)
	I0816 13:45:00.274065   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274080   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274373   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274388   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274398   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274406   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274441   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274481   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274499   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274513   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274620   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274633   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274643   58430 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-893736"
	I0816 13:45:00.274749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274842   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274851   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.276747   58430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 13:45:00.278150   58430 addons.go:510] duration metric: took 1.506994649s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 13:44:57.858846   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:00.357028   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:03.253913   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254379   57240 main.go:141] libmachine: (embed-certs-302520) Found IP for machine: 192.168.39.125
	I0816 13:45:03.254401   57240 main.go:141] libmachine: (embed-certs-302520) Reserving static IP address...
	I0816 13:45:03.254418   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has current primary IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254776   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.254804   57240 main.go:141] libmachine: (embed-certs-302520) Reserved static IP address: 192.168.39.125
	I0816 13:45:03.254822   57240 main.go:141] libmachine: (embed-certs-302520) DBG | skip adding static IP to network mk-embed-certs-302520 - found existing host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"}
	I0816 13:45:03.254840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Getting to WaitForSSH function...
	I0816 13:45:03.254848   57240 main.go:141] libmachine: (embed-certs-302520) Waiting for SSH to be available...
	I0816 13:45:03.256951   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257302   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.257327   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH client type: external
	I0816 13:45:03.257483   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa (-rw-------)
	I0816 13:45:03.257519   57240 main.go:141] libmachine: (embed-certs-302520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:45:03.257528   57240 main.go:141] libmachine: (embed-certs-302520) DBG | About to run SSH command:
	I0816 13:45:03.257537   57240 main.go:141] libmachine: (embed-certs-302520) DBG | exit 0
	I0816 13:45:03.389262   57240 main.go:141] libmachine: (embed-certs-302520) DBG | SSH cmd err, output: <nil>: 
	I0816 13:45:03.389630   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetConfigRaw
	I0816 13:45:03.390305   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.392462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.392767   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.392795   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.393012   57240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:45:03.393212   57240 machine.go:93] provisionDockerMachine start ...
	I0816 13:45:03.393230   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:03.393453   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.395589   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.395949   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.395971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.396086   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.396258   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396589   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.396785   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.397004   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.397042   57240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:45:03.513624   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:45:03.513655   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.513954   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:45:03.513976   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.514199   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.517138   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517499   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.517520   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517672   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.517867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518007   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518168   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.518364   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.518583   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.518599   57240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-302520 && echo "embed-certs-302520" | sudo tee /etc/hostname
	I0816 13:45:03.647799   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-302520
	
	I0816 13:45:03.647840   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.650491   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.650846   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.650880   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.651103   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.651301   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651469   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651614   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.651778   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.651935   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.651951   57240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-302520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-302520/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-302520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:45:03.778350   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:45:03.778381   57240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:45:03.778411   57240 buildroot.go:174] setting up certificates
	I0816 13:45:03.778423   57240 provision.go:84] configureAuth start
	I0816 13:45:03.778435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.778689   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.781319   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781673   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.781695   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781829   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.783724   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784035   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.784064   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784180   57240 provision.go:143] copyHostCerts
	I0816 13:45:03.784243   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:45:03.784262   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:45:03.784335   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:45:03.784462   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:45:03.784474   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:45:03.784503   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:45:03.784568   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:45:03.784578   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:45:03.784600   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:45:03.784647   57240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.embed-certs-302520 san=[127.0.0.1 192.168.39.125 embed-certs-302520 localhost minikube]
	I0816 13:45:03.901261   57240 provision.go:177] copyRemoteCerts
	I0816 13:45:03.901314   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:45:03.901339   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.904505   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.904893   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.904933   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.905118   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.905329   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.905499   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.905650   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:03.996083   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:45:04.024594   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:45:04.054080   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:45:04.079810   57240 provision.go:87] duration metric: took 301.374056ms to configureAuth
	I0816 13:45:04.079865   57240 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:45:04.080048   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:45:04.080116   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.082649   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083037   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.083090   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083239   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.083430   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083775   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.083951   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.084149   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.084171   57240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:45:04.404121   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:45:04.404150   57240 machine.go:96] duration metric: took 1.010924979s to provisionDockerMachine
	I0816 13:45:04.404163   57240 start.go:293] postStartSetup for "embed-certs-302520" (driver="kvm2")
	I0816 13:45:04.404182   57240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:45:04.404202   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.404542   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:45:04.404574   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.407763   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408118   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.408145   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408311   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.408508   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.408685   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.408864   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.496519   57240 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:45:04.501262   57240 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:45:04.501282   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:45:04.501352   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:45:04.501440   57240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:45:04.501554   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:45:04.511338   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:04.535372   57240 start.go:296] duration metric: took 131.188411ms for postStartSetup
	I0816 13:45:04.535411   57240 fix.go:56] duration metric: took 21.301761751s for fixHost
	I0816 13:45:04.535435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.538286   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538651   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.538676   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538868   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.539069   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539208   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539344   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.539504   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.539702   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.539715   57240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:45:04.653529   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815904.606422212
	
	I0816 13:45:04.653556   57240 fix.go:216] guest clock: 1723815904.606422212
	I0816 13:45:04.653566   57240 fix.go:229] Guest: 2024-08-16 13:45:04.606422212 +0000 UTC Remote: 2024-08-16 13:45:04.535416156 +0000 UTC m=+359.547804920 (delta=71.006056ms)
	I0816 13:45:04.653598   57240 fix.go:200] guest clock delta is within tolerance: 71.006056ms
	I0816 13:45:04.653605   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 21.419990329s
	I0816 13:45:04.653631   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.653922   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:04.656682   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.657034   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657211   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657800   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657981   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.658069   57240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:45:04.658114   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.658172   57240 ssh_runner.go:195] Run: cat /version.json
	I0816 13:45:04.658193   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.660629   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.660942   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661051   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661076   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661315   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661474   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.661598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661646   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.661841   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.661904   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.662054   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.662199   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.767691   57240 ssh_runner.go:195] Run: systemctl --version
	I0816 13:45:04.773984   57240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:45:04.925431   57240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:45:04.931848   57240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:45:04.931931   57240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:45:04.951355   57240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:45:04.951381   57240 start.go:495] detecting cgroup driver to use...
	I0816 13:45:04.951442   57240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:45:04.972903   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:45:04.987531   57240 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:45:04.987600   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:45:05.001880   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:45:05.018403   57240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.999833   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:03.500652   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:05.143662   57240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:45:05.297447   57240 docker.go:233] disabling docker service ...
	I0816 13:45:05.297527   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:45:05.313382   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:45:05.327116   57240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:45:05.486443   57240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:45:05.620465   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:45:05.634813   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:45:05.653822   57240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:45:05.653887   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.664976   57240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:45:05.665045   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.676414   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.688631   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.700400   57240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:45:05.712822   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.724573   57240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.742934   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.755669   57240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:45:05.766837   57240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:45:05.766890   57240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:45:05.782296   57240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:45:05.793695   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.919559   57240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:45:06.057480   57240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:45:06.057543   57240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:45:06.062348   57240 start.go:563] Will wait 60s for crictl version
	I0816 13:45:06.062414   57240 ssh_runner.go:195] Run: which crictl
	I0816 13:45:06.066456   57240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:45:06.104075   57240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:45:06.104156   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.132406   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.161878   57240 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:45:02.357119   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:04.361365   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.859546   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.163233   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:06.165924   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166310   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:06.166333   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166529   57240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:45:06.170722   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:06.183152   57240 kubeadm.go:883] updating cluster {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:45:06.183256   57240 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:45:06.183306   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:06.223405   57240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:45:06.223489   57240 ssh_runner.go:195] Run: which lz4
	I0816 13:45:06.227851   57240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:45:06.232132   57240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:45:06.232156   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:45:07.642616   57240 crio.go:462] duration metric: took 1.414789512s to copy over tarball
	I0816 13:45:07.642698   57240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:45:09.794329   57240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.151601472s)
	I0816 13:45:09.794359   57240 crio.go:469] duration metric: took 2.151717024s to extract the tarball
	I0816 13:45:09.794369   57240 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:45:09.833609   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:09.878781   57240 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:45:09.878806   57240 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:45:09.878815   57240 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0816 13:45:09.878944   57240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:45:09.879032   57240 ssh_runner.go:195] Run: crio config
	I0816 13:45:09.924876   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:09.924900   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:09.924927   57240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:45:09.924958   57240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-302520 NodeName:embed-certs-302520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:45:09.925150   57240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-302520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:45:09.925226   57240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:45:09.935204   57240 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:45:09.935280   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:45:09.945366   57240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 13:45:09.961881   57240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:45:09.978495   57240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 13:45:09.995664   57240 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0816 13:45:10.000132   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:10.013039   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.000553   58430 node_ready.go:49] node "default-k8s-diff-port-893736" has status "Ready":"True"
	I0816 13:45:06.000579   58430 node_ready.go:38] duration metric: took 7.004778161s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:45:06.000590   58430 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:06.006987   58430 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012552   58430 pod_ready.go:93] pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.012577   58430 pod_ready.go:82] duration metric: took 5.565882ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012588   58430 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519889   58430 pod_ready.go:93] pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.519919   58430 pod_ready.go:82] duration metric: took 507.322547ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519932   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:08.527411   58430 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:09.527923   58430 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.527950   58430 pod_ready.go:82] duration metric: took 3.008009418s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.527963   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534422   58430 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.534460   58430 pod_ready.go:82] duration metric: took 6.488169ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534476   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538660   58430 pod_ready.go:93] pod "kube-proxy-btq6r" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.538688   58430 pod_ready.go:82] duration metric: took 4.202597ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538700   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600350   58430 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.600377   58430 pod_ready.go:82] duration metric: took 61.666987ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600391   58430 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.361968   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:11.859112   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:10.143519   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:45:10.160358   57240 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520 for IP: 192.168.39.125
	I0816 13:45:10.160381   57240 certs.go:194] generating shared ca certs ...
	I0816 13:45:10.160400   57240 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:45:10.160591   57240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:45:10.160646   57240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:45:10.160656   57240 certs.go:256] generating profile certs ...
	I0816 13:45:10.160767   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/client.key
	I0816 13:45:10.160845   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key.f0c5f9ff
	I0816 13:45:10.160893   57240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key
	I0816 13:45:10.161075   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:45:10.161133   57240 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:45:10.161148   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:45:10.161182   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:45:10.161213   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:45:10.161243   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:45:10.161298   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:10.161944   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:45:10.202268   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:45:10.242684   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:45:10.287223   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:45:10.316762   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 13:45:10.343352   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:45:10.371042   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:45:10.394922   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:45:10.419358   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:45:10.442301   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:45:10.465266   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:45:10.487647   57240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:45:10.504713   57240 ssh_runner.go:195] Run: openssl version
	I0816 13:45:10.510493   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:45:10.521818   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526637   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526681   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.532660   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:45:10.543403   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:45:10.554344   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559089   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559149   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.564982   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:45:10.576074   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:45:10.586596   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591586   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591637   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.597624   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:45:10.608838   57240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:45:10.613785   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:45:10.619902   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:45:10.625554   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:45:10.631526   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:45:10.637251   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:45:10.643210   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:45:10.649187   57240 kubeadm.go:392] StartCluster: {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:45:10.649298   57240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:45:10.649349   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.686074   57240 cri.go:89] found id: ""
	I0816 13:45:10.686153   57240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:45:10.696504   57240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:45:10.696527   57240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:45:10.696581   57240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:45:10.706447   57240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:45:10.707413   57240 kubeconfig.go:125] found "embed-certs-302520" server: "https://192.168.39.125:8443"
	I0816 13:45:10.710045   57240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:45:10.719563   57240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0816 13:45:10.719599   57240 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:45:10.719613   57240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:45:10.719665   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.759584   57240 cri.go:89] found id: ""
	I0816 13:45:10.759661   57240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:45:10.776355   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:45:10.786187   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:45:10.786205   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:45:10.786244   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:45:10.795644   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:45:10.795723   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:45:10.807988   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:45:10.817234   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:45:10.817299   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:45:10.826601   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.835702   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:45:10.835763   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.845160   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:45:10.855522   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:45:10.855578   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:45:10.865445   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:45:10.875429   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:10.988958   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.195215   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206217359s)
	I0816 13:45:12.195241   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.432322   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.514631   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.606133   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:45:12.606238   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.106823   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.606856   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.624866   57240 api_server.go:72] duration metric: took 1.018743147s to wait for apiserver process to appear ...
	I0816 13:45:13.624897   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:45:13.624930   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:13.625953   57240 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I0816 13:45:14.124979   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.607443   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.107647   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.357916   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.358986   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.404020   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.404049   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.404062   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.462649   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.462685   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.625998   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.632560   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:16.632586   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.124984   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.133533   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:17.133563   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.624993   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.629720   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:45:17.635848   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:45:17.635874   57240 api_server.go:131] duration metric: took 4.010970063s to wait for apiserver health ...
	I0816 13:45:17.635885   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:17.635892   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:17.637609   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:45:17.638828   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:45:17.650034   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:45:17.681352   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:45:17.691752   57240 system_pods.go:59] 8 kube-system pods found
	I0816 13:45:17.691784   57240 system_pods.go:61] "coredns-6f6b679f8f-phxht" [df7bd896-d1c6-4a0e-aead-e3db36e915da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:45:17.691792   57240 system_pods.go:61] "etcd-embed-certs-302520" [ef7bae1c-7cd3-4d8e-b2fc-e5837f4c5a1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:45:17.691801   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [957ba8ec-91ae-4cea-902f-81a286e35659] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:45:17.691806   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [afbfc2da-5435-4ebb-ada0-e0edc9d09a7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:45:17.691817   57240 system_pods.go:61] "kube-proxy-nnc6b" [ec8b820d-6f1d-4777-9f76-7efffb4e6e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:45:17.691824   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [077024c8-3dfd-4e8c-850a-333b63d3f23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:45:17.691832   57240 system_pods.go:61] "metrics-server-6867b74b74-9277d" [5d7ee9e5-b40c-4840-9fb4-0b7b8be9faca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:45:17.691837   57240 system_pods.go:61] "storage-provisioner" [6f3dc7f6-a3e0-4bc3-b362-e1d97633d0eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:45:17.691854   57240 system_pods.go:74] duration metric: took 10.481601ms to wait for pod list to return data ...
	I0816 13:45:17.691861   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:45:17.695253   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:45:17.695278   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:45:17.695292   57240 node_conditions.go:105] duration metric: took 3.4236ms to run NodePressure ...
	I0816 13:45:17.695311   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:17.996024   57240 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999887   57240 kubeadm.go:739] kubelet initialised
	I0816 13:45:17.999906   57240 kubeadm.go:740] duration metric: took 3.859222ms waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999913   57240 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:18.004476   57240 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.009142   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009162   57240 pod_ready.go:82] duration metric: took 4.665087ms for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.009170   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009175   57240 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.014083   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014102   57240 pod_ready.go:82] duration metric: took 4.91913ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.014118   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014124   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.018257   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018276   57240 pod_ready.go:82] duration metric: took 4.14471ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.018283   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018288   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.085229   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085257   57240 pod_ready.go:82] duration metric: took 66.95357ms for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.085269   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085276   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485094   57240 pod_ready.go:93] pod "kube-proxy-nnc6b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:18.485124   57240 pod_ready.go:82] duration metric: took 399.831747ms for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485135   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.606838   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.857109   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.858242   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.491635   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:22.492484   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:24.992054   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.107371   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.607589   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.357770   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.358102   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:26.992544   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.491552   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.106454   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:28.107564   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.115954   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:27.358671   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.857631   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:31.862487   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.491291   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:30.491320   57240 pod_ready.go:82] duration metric: took 12.006175772s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:30.491333   57240 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:32.497481   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.500397   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:32.606912   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.607600   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.358649   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:36.856727   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.003939   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.498178   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:37.106303   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.107343   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:38.857564   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.858908   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:41.998945   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:41.609813   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:44.116781   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.358670   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:45.857710   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.497146   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:48.498302   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.606815   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.107387   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:47.858274   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.861382   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.999007   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:53.497248   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:51.607047   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.106991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:52.363317   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.857924   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.497520   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.498597   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.997729   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.107187   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:58.606550   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.358875   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.857940   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.859677   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.998260   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.498473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:01.106533   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:03.107325   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.107629   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.359077   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.857557   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.997606   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:08.998915   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:07.606112   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.608137   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.357201   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.359055   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.497851   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.998849   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:12.106330   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:14.106732   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.857302   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:15.858233   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:16.497347   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.497948   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:16.107456   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.107650   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.608155   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.357472   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.857964   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.498563   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:22.998319   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:23.106190   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.106765   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:23.356443   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.356705   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.496930   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.497933   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.500639   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:46:27.606739   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.606870   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.357970   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.861012   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:31.998584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.497511   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.106718   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.606961   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:32.357555   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.857062   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.857679   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.997085   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.999741   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:37.113026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:39.606526   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.857809   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.358047   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.497724   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.498815   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.108651   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:44.606597   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.856419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.858360   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.998195   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.497584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:47.106288   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:49.108117   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.357517   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.857069   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.501014   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:52.998618   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.607335   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:54.106631   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:53.357847   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.857456   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.497897   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.999173   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:56.107168   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:58.606805   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.607098   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.857681   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.357433   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.497738   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.998036   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.998316   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:02.607546   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:05.106955   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.358368   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.359618   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.858437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.999109   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.498087   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:07.107809   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.606867   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.358384   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.359451   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.997273   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.998709   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.107421   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:14.606538   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.857389   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.857870   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:16.498127   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.498877   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
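The same cycle repeats for the remainder of this start attempt: every crictl query for a control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) returns an empty id list, and the "describe nodes" step fails because nothing is serving on localhost:8443. A minimal sketch of the equivalent manual checks from inside the node (for example via minikube ssh on the affected profile); the first and last commands mirror the ones the harness runs in the log, while the port probe is an added assumption, not something the harness executes:

    # Ask CRI-O directly whether an apiserver container exists in any state (same query as the log).
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Assumption: check whether anything is listening on the apiserver port that "describe nodes" probes.
    sudo ss -ltnp | grep 8443
    # Re-run the exact "describe nodes" call the harness uses, with the same binary and kubeconfig.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig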
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:16.607841   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:19.107952   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.358184   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.857525   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.499116   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.996642   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.998888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
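Interleaved with the crictl queries above are readiness polls from three other profiles (pids 58430, 57440 and 57240), each waiting on a metrics-server pod whose Ready condition stays False. A hedged sketch of how that condition can be inspected by hand; the kubectl context name is a placeholder, and only the pod name and namespace are taken from the log:

    # Print the Ready condition that pod_ready.go keeps polling (context name is a placeholder).
    kubectl --context <profile-context> -n kube-system get pod metrics-server-6867b74b74-9277d \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Pod events usually show why the container never reports ready.
    kubectl --context <profile-context> -n kube-system describe pod metrics-server-6867b74b74-9277d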
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.607247   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:23.608388   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.857995   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.858506   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.497595   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.997611   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:26.107830   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:28.607294   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:30.613619   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.356723   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.358026   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.857750   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:32.497685   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.500214   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:33.106665   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:35.107026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.358059   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.858347   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.998289   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.499047   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:37.107091   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.108555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.358316   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.858946   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.998041   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.498467   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:41.606734   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:43.608097   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.357080   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.357589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.503576   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.998084   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.105523   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.107122   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.606969   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.357769   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.859617   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.998256   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.498046   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:52.607529   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.107256   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.357315   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.357380   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.498193   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.498238   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.997582   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:57.606146   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.606603   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.358416   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.861332   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.998633   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.498646   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:01.607315   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.107759   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:02.358142   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.857766   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:06.998437   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.496888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:06.110473   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:08.606467   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:07.356589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.357419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.357839   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.497754   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.997641   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:11.108299   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.612816   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.856979   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.857269   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.999100   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.497473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:16.105992   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.606940   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.607532   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.357812   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.358351   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.498644   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.998103   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:24.999122   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:23.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.607110   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.858674   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.358411   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.497538   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.498345   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:27.607276   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:30.106660   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.856585   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.857836   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:31.498615   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:33.999039   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.107363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.607045   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.357801   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.856980   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.857321   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.497064   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.498666   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:48:36.608364   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:39.106585   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.857386   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.358217   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:40.997776   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.998819   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:44.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.106806   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:43.107007   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:45.606716   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.357715   57440 pod_ready.go:82] duration metric: took 4m0.006671881s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:48:42.357741   57440 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:48:42.357749   57440 pod_ready.go:39] duration metric: took 4m4.542302811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:48:42.357762   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:48:42.357787   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:42.357834   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:42.415231   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:42.415255   57440 cri.go:89] found id: ""
	I0816 13:48:42.415265   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:42.415324   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.421713   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:42.421779   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:42.462840   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:42.462867   57440 cri.go:89] found id: ""
	I0816 13:48:42.462878   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:42.462940   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.467260   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:42.467321   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:42.505423   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:42.505449   57440 cri.go:89] found id: ""
	I0816 13:48:42.505458   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:42.505517   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.510072   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:42.510124   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:42.551873   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:42.551894   57440 cri.go:89] found id: ""
	I0816 13:48:42.551902   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:42.551949   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.556735   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:42.556783   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:42.595853   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:42.595884   57440 cri.go:89] found id: ""
	I0816 13:48:42.595895   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:42.595948   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.600951   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:42.601003   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:42.639288   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.639311   57440 cri.go:89] found id: ""
	I0816 13:48:42.639320   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:42.639367   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.644502   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:42.644554   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:42.685041   57440 cri.go:89] found id: ""
	I0816 13:48:42.685065   57440 logs.go:276] 0 containers: []
	W0816 13:48:42.685074   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:42.685079   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:42.685137   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:42.722485   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.722506   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:42.722510   57440 cri.go:89] found id: ""
	I0816 13:48:42.722519   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:42.722590   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.727136   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.731169   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:42.731189   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.794303   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:42.794334   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.833686   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:42.833715   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:42.874606   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:42.874632   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:42.948074   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:42.948111   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:42.963546   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:42.963571   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:43.027410   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:43.027446   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:43.067643   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:43.067670   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:43.115156   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:43.115183   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:43.246588   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:43.246618   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:43.291042   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:43.291069   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:43.330741   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:43.330771   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:43.371970   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:43.371999   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:46.357313   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:46.373368   57440 api_server.go:72] duration metric: took 4m16.32601859s to wait for apiserver process to appear ...
	I0816 13:48:46.373396   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:48:46.373426   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:46.373473   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:46.411034   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:46.411059   57440 cri.go:89] found id: ""
	I0816 13:48:46.411067   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:46.411121   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.415948   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:46.416009   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:46.458648   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:46.458673   57440 cri.go:89] found id: ""
	I0816 13:48:46.458684   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:46.458735   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.463268   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:46.463332   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:46.502120   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:46.502139   57440 cri.go:89] found id: ""
	I0816 13:48:46.502149   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:46.502319   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.508632   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:46.508692   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:46.552732   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.552757   57440 cri.go:89] found id: ""
	I0816 13:48:46.552765   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:46.552812   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.557459   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:46.557524   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:46.598286   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:46.598308   57440 cri.go:89] found id: ""
	I0816 13:48:46.598330   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:46.598403   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.603050   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:46.603110   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:46.641616   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:46.641638   57440 cri.go:89] found id: ""
	I0816 13:48:46.641648   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:46.641712   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.646008   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:46.646076   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:46.682259   57440 cri.go:89] found id: ""
	I0816 13:48:46.682290   57440 logs.go:276] 0 containers: []
	W0816 13:48:46.682302   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:46.682310   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:46.682366   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:46.718955   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.718979   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:46.718985   57440 cri.go:89] found id: ""
	I0816 13:48:46.718993   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:46.719049   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.723519   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.727942   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:46.727968   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.771942   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:46.771971   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.818294   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:46.818319   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:46.887977   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:46.888021   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:46.903567   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:46.903599   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:47.010715   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:47.010747   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:47.056317   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:47.056346   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:47.114669   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:47.114696   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:47.498472   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.998541   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.606991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.607458   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.157046   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:47.157073   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:47.199364   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:47.199393   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:47.640964   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:47.641003   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:47.683503   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:47.683541   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:47.746748   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:47.746798   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.296176   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:48:50.300482   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:48:50.301550   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:48:50.301570   57440 api_server.go:131] duration metric: took 3.928168044s to wait for apiserver health ...
	I0816 13:48:50.301578   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:48:50.301599   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:50.301653   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:50.343199   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.343223   57440 cri.go:89] found id: ""
	I0816 13:48:50.343231   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:50.343276   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.347576   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:50.347651   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:50.387912   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.387937   57440 cri.go:89] found id: ""
	I0816 13:48:50.387947   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:50.388004   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.392120   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:50.392188   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:50.428655   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:50.428680   57440 cri.go:89] found id: ""
	I0816 13:48:50.428688   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:50.428734   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.432863   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:50.432941   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:50.472269   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:50.472295   57440 cri.go:89] found id: ""
	I0816 13:48:50.472304   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:50.472351   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.476961   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:50.477006   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:50.514772   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:50.514793   57440 cri.go:89] found id: ""
	I0816 13:48:50.514801   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:50.514857   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.520430   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:50.520492   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:50.564708   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:50.564733   57440 cri.go:89] found id: ""
	I0816 13:48:50.564741   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:50.564788   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.569255   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:50.569306   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:50.607803   57440 cri.go:89] found id: ""
	I0816 13:48:50.607823   57440 logs.go:276] 0 containers: []
	W0816 13:48:50.607829   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:50.607835   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:50.607888   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:50.643909   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.643934   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:50.643940   57440 cri.go:89] found id: ""
	I0816 13:48:50.643949   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:50.643994   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.648575   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.653322   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:50.653354   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:50.667847   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:50.667878   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:50.774932   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:50.774969   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.823473   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:50.823503   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.884009   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:50.884044   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.925187   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:50.925219   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.965019   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:50.965046   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:51.033614   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:51.033651   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:51.068360   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:51.068387   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:51.107768   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:51.107792   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:51.163637   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:51.163673   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:51.227436   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:51.227462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:51.265505   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:51.265531   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:54.130801   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:48:54.130828   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.130833   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.130837   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.130840   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.130843   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.130846   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.130852   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.130855   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.130862   57440 system_pods.go:74] duration metric: took 3.829279192s to wait for pod list to return data ...
	I0816 13:48:54.130868   57440 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:48:54.133253   57440 default_sa.go:45] found service account: "default"
	I0816 13:48:54.133282   57440 default_sa.go:55] duration metric: took 2.407297ms for default service account to be created ...
	I0816 13:48:54.133292   57440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:48:54.138812   57440 system_pods.go:86] 8 kube-system pods found
	I0816 13:48:54.138835   57440 system_pods.go:89] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.138841   57440 system_pods.go:89] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.138845   57440 system_pods.go:89] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.138849   57440 system_pods.go:89] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.138853   57440 system_pods.go:89] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.138856   57440 system_pods.go:89] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.138863   57440 system_pods.go:89] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.138868   57440 system_pods.go:89] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.138874   57440 system_pods.go:126] duration metric: took 5.576801ms to wait for k8s-apps to be running ...
	I0816 13:48:54.138879   57440 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:48:54.138922   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:54.154406   57440 system_svc.go:56] duration metric: took 15.507123ms WaitForService to wait for kubelet
	I0816 13:48:54.154438   57440 kubeadm.go:582] duration metric: took 4m24.107091364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:48:54.154463   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:48:54.156991   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:48:54.157012   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:48:54.157027   57440 node_conditions.go:105] duration metric: took 2.558338ms to run NodePressure ...
	I0816 13:48:54.157041   57440 start.go:241] waiting for startup goroutines ...
	I0816 13:48:54.157052   57440 start.go:246] waiting for cluster config update ...
	I0816 13:48:54.157070   57440 start.go:255] writing updated cluster config ...
	I0816 13:48:54.157381   57440 ssh_runner.go:195] Run: rm -f paused
	I0816 13:48:54.205583   57440 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:48:54.207845   57440 out.go:177] * Done! kubectl is now configured to use "no-preload-311070" cluster and "default" namespace by default
	I0816 13:48:51.999301   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.498057   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:52.107465   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.606735   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.498967   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.997311   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.606925   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.606970   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.607943   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.997760   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:02.998653   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:03.107555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.606363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.497723   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.498572   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.997905   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.607916   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.606579   58430 pod_ready.go:82] duration metric: took 4m0.00617652s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:09.606602   58430 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:49:09.606612   58430 pod_ready.go:39] duration metric: took 4m3.606005486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:09.606627   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:49:09.606652   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:09.606698   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:09.660442   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:09.660461   58430 cri.go:89] found id: ""
	I0816 13:49:09.660469   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:09.660519   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.664752   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:09.664813   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:09.701589   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:09.701615   58430 cri.go:89] found id: ""
	I0816 13:49:09.701625   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:09.701681   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.706048   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:09.706114   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:09.743810   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:09.743832   58430 cri.go:89] found id: ""
	I0816 13:49:09.743841   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:09.743898   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.748197   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:09.748271   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:09.783730   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:09.783752   58430 cri.go:89] found id: ""
	I0816 13:49:09.783765   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:09.783828   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.787845   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:09.787909   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:09.828449   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:09.828472   58430 cri.go:89] found id: ""
	I0816 13:49:09.828481   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:09.828546   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.832890   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:09.832963   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:09.880136   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:09.880164   58430 cri.go:89] found id: ""
	I0816 13:49:09.880175   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:09.880232   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.884533   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:09.884599   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:09.924776   58430 cri.go:89] found id: ""
	I0816 13:49:09.924805   58430 logs.go:276] 0 containers: []
	W0816 13:49:09.924816   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:09.924828   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:09.924889   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:09.971663   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:09.971689   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:09.971695   58430 cri.go:89] found id: ""
	I0816 13:49:09.971705   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:09.971770   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.976297   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.980815   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:09.980844   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:10.020287   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:10.020317   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:10.060266   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:10.060291   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:10.113574   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:10.113608   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:10.153457   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:10.153482   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:10.191530   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:10.191559   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:10.206267   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:10.206296   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:10.326723   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:10.326753   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:10.377541   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:10.377574   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:10.895387   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:10.895445   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:10.947447   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:10.947475   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:11.997943   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:13.998932   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:11.020745   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:11.020786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:11.081224   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:11.081257   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.632726   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:49:13.651185   58430 api_server.go:72] duration metric: took 4m14.880109274s to wait for apiserver process to appear ...
	I0816 13:49:13.651214   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:49:13.651254   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:13.651308   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:13.691473   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:13.691495   58430 cri.go:89] found id: ""
	I0816 13:49:13.691503   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:13.691582   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.695945   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:13.695998   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:13.730798   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:13.730830   58430 cri.go:89] found id: ""
	I0816 13:49:13.730840   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:13.730913   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.735156   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:13.735222   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:13.769612   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:13.769639   58430 cri.go:89] found id: ""
	I0816 13:49:13.769650   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:13.769710   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.773690   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:13.773745   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:13.815417   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:13.815444   58430 cri.go:89] found id: ""
	I0816 13:49:13.815454   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:13.815515   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.819596   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:13.819666   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:13.852562   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.852587   58430 cri.go:89] found id: ""
	I0816 13:49:13.852597   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:13.852657   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.856697   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:13.856757   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:13.902327   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:13.902346   58430 cri.go:89] found id: ""
	I0816 13:49:13.902353   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:13.902416   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.906789   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:13.906840   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:13.943401   58430 cri.go:89] found id: ""
	I0816 13:49:13.943430   58430 logs.go:276] 0 containers: []
	W0816 13:49:13.943438   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:13.943443   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:13.943490   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:13.979154   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:13.979178   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:13.979182   58430 cri.go:89] found id: ""
	I0816 13:49:13.979189   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:13.979235   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.983301   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.988522   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:13.988545   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:14.005891   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:14.005916   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:14.055686   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:14.055713   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:14.104975   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:14.105010   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:14.145761   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:14.145786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:14.198935   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:14.198966   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:14.662287   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:14.662323   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:14.717227   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:14.717256   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:14.789824   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:14.789868   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:14.902892   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:14.902922   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:14.946711   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:14.946736   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:14.986143   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:14.986175   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:15.022107   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:15.022138   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:16.497493   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:18.497979   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:17.556820   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:49:17.562249   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:49:17.563264   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:49:17.563280   58430 api_server.go:131] duration metric: took 3.912060569s to wait for apiserver health ...
	I0816 13:49:17.563288   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:49:17.563312   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:17.563377   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:17.604072   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:17.604099   58430 cri.go:89] found id: ""
	I0816 13:49:17.604109   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:17.604163   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.608623   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:17.608678   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:17.650241   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.650267   58430 cri.go:89] found id: ""
	I0816 13:49:17.650275   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:17.650328   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.654928   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:17.655000   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:17.690057   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:17.690085   58430 cri.go:89] found id: ""
	I0816 13:49:17.690095   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:17.690164   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.694636   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:17.694692   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:17.730134   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:17.730167   58430 cri.go:89] found id: ""
	I0816 13:49:17.730177   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:17.730238   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.734364   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:17.734420   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:17.769579   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:17.769595   58430 cri.go:89] found id: ""
	I0816 13:49:17.769603   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:17.769643   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.773543   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:17.773601   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:17.814287   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:17.814310   58430 cri.go:89] found id: ""
	I0816 13:49:17.814319   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:17.814393   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.818904   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:17.818977   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:17.858587   58430 cri.go:89] found id: ""
	I0816 13:49:17.858614   58430 logs.go:276] 0 containers: []
	W0816 13:49:17.858622   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:17.858627   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:17.858674   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:17.901759   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:17.901784   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:17.901788   58430 cri.go:89] found id: ""
	I0816 13:49:17.901796   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:17.901853   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.906139   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.910273   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:17.910293   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:17.924565   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:17.924590   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.971895   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:17.971927   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:18.011332   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:18.011364   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:18.049264   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:18.049292   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:18.084004   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:18.084030   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:18.136961   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:18.137000   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:18.210452   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:18.210483   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:18.327398   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:18.327429   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:18.378777   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:18.378809   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:18.430052   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:18.430088   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:18.496775   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:18.496806   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:18.540493   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:18.540523   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:21.451644   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:49:21.451673   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.451679   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.451682   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.451687   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.451691   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.451694   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.451701   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.451705   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.451713   58430 system_pods.go:74] duration metric: took 3.888418707s to wait for pod list to return data ...
	I0816 13:49:21.451719   58430 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:49:21.454558   58430 default_sa.go:45] found service account: "default"
	I0816 13:49:21.454578   58430 default_sa.go:55] duration metric: took 2.853068ms for default service account to be created ...
	I0816 13:49:21.454585   58430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:49:21.458906   58430 system_pods.go:86] 8 kube-system pods found
	I0816 13:49:21.458930   58430 system_pods.go:89] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.458935   58430 system_pods.go:89] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.458941   58430 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.458944   58430 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.458948   58430 system_pods.go:89] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.458951   58430 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.458958   58430 system_pods.go:89] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.458961   58430 system_pods.go:89] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.458968   58430 system_pods.go:126] duration metric: took 4.378971ms to wait for k8s-apps to be running ...
	I0816 13:49:21.458975   58430 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:49:21.459016   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:21.476060   58430 system_svc.go:56] duration metric: took 17.075817ms WaitForService to wait for kubelet
	I0816 13:49:21.476086   58430 kubeadm.go:582] duration metric: took 4m22.705015833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:49:21.476109   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:49:21.479557   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:49:21.479585   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:49:21.479600   58430 node_conditions.go:105] duration metric: took 3.483638ms to run NodePressure ...
	I0816 13:49:21.479613   58430 start.go:241] waiting for startup goroutines ...
	I0816 13:49:21.479622   58430 start.go:246] waiting for cluster config update ...
	I0816 13:49:21.479637   58430 start.go:255] writing updated cluster config ...
	I0816 13:49:21.479949   58430 ssh_runner.go:195] Run: rm -f paused
	I0816 13:49:21.530237   58430 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:49:21.532328   58430 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893736" cluster and "default" namespace by default
	I0816 13:49:20.998486   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:23.498358   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:25.498502   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:27.998622   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:30.491886   57240 pod_ready.go:82] duration metric: took 4m0.000539211s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:30.491929   57240 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 13:49:30.491945   57240 pod_ready.go:39] duration metric: took 4m12.492024576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:30.491972   57240 kubeadm.go:597] duration metric: took 4m19.795438093s to restartPrimaryControlPlane
	W0816 13:49:30.492032   57240 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:49:30.492059   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:49:56.783263   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.29118348s)
	I0816 13:49:56.783321   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:56.798550   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:49:56.810542   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:49:56.820837   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:49:56.820873   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:49:56.820947   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:49:56.831998   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:49:56.832057   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:49:56.842351   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:49:56.852062   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:49:56.852119   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:49:56.862337   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.872000   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:49:56.872050   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.881764   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:49:56.891211   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:49:56.891276   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:49:56.900969   57240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:49:56.942823   57240 kubeadm.go:310] W0816 13:49:56.895203    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:56.943751   57240 kubeadm.go:310] W0816 13:49:56.896255    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:57.049491   57240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:05.244505   57240 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 13:50:05.244561   57240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:05.244657   57240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:05.244775   57240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:05.244901   57240 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:05.244989   57240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:05.246568   57240 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:05.246667   57240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:05.246779   57240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:05.246885   57240 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:05.246968   57240 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:05.247065   57240 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:05.247125   57240 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:05.247195   57240 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:05.247260   57240 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:05.247372   57240 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:05.247480   57240 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:05.247521   57240 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:05.247590   57240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:05.247670   57240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:05.247751   57240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 13:50:05.247830   57240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:05.247886   57240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:05.247965   57240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:05.248046   57240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:05.248100   57240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:05.249601   57240 out.go:235]   - Booting up control plane ...
	I0816 13:50:05.249698   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:05.249779   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:05.249835   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:05.249930   57240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:05.250007   57240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:05.250046   57240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:05.250184   57240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 13:50:05.250289   57240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 13:50:05.250343   57240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296228s
	I0816 13:50:05.250403   57240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 13:50:05.250456   57240 kubeadm.go:310] [api-check] The API server is healthy after 5.002119618s
	I0816 13:50:05.250546   57240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 13:50:05.250651   57240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 13:50:05.250700   57240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 13:50:05.250876   57240 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-302520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 13:50:05.250930   57240 kubeadm.go:310] [bootstrap-token] Using token: dta4cr.diyk2wto3tx3ixlb
	I0816 13:50:05.252120   57240 out.go:235]   - Configuring RBAC rules ...
	I0816 13:50:05.252207   57240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 13:50:05.252287   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 13:50:05.252418   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 13:50:05.252542   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 13:50:05.252648   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 13:50:05.252724   57240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 13:50:05.252819   57240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 13:50:05.252856   57240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 13:50:05.252895   57240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 13:50:05.252901   57240 kubeadm.go:310] 
	I0816 13:50:05.253004   57240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 13:50:05.253022   57240 kubeadm.go:310] 
	I0816 13:50:05.253116   57240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 13:50:05.253126   57240 kubeadm.go:310] 
	I0816 13:50:05.253155   57240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 13:50:05.253240   57240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 13:50:05.253283   57240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 13:50:05.253289   57240 kubeadm.go:310] 
	I0816 13:50:05.253340   57240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 13:50:05.253347   57240 kubeadm.go:310] 
	I0816 13:50:05.253405   57240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 13:50:05.253423   57240 kubeadm.go:310] 
	I0816 13:50:05.253484   57240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 13:50:05.253556   57240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 13:50:05.253621   57240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 13:50:05.253629   57240 kubeadm.go:310] 
	I0816 13:50:05.253710   57240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 13:50:05.253840   57240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 13:50:05.253855   57240 kubeadm.go:310] 
	I0816 13:50:05.253962   57240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254087   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 13:50:05.254122   57240 kubeadm.go:310] 	--control-plane 
	I0816 13:50:05.254126   57240 kubeadm.go:310] 
	I0816 13:50:05.254202   57240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 13:50:05.254209   57240 kubeadm.go:310] 
	I0816 13:50:05.254280   57240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254394   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
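The join commands above carry a bootstrap token and a CA-certificate hash. As a side note (not part of this log), that sha256 value can be recomputed from the cluster CA to confirm it matches what kubeadm printed; a minimal sketch, assuming the CA sits under the certificate dir named earlier in this run (/var/lib/minikube/certs/ca.crt):

	# recompute the discovery-token CA cert hash (illustrative; path assumed)
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'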
	I0816 13:50:05.254407   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:50:05.254416   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:50:05.255889   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:50:05.257086   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:50:05.268668   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
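The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. For orientation only, a bridge conflist of this kind typically has the shape below; the exact plugin list and subnet are assumptions, not the file this run generated:

	sudo cat /etc/cni/net.d/1-k8s.conflist
	# representative bridge CNI config (illustrative, not the actual 496-byte file)
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}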
	I0816 13:50:05.288676   57240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:50:05.288735   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.288755   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-302520 minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=embed-certs-302520 minikube.k8s.io/primary=true
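The two commands above bind cluster-admin to the kube-system default service account (the minikube-rbac clusterrolebinding) and stamp version/primary labels onto the node. A quick follow-up check, using the same embedded kubectl and kubeconfig the log already uses, would be:

	# verify the binding and the node labels (illustrative follow-up, not run by the test)
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node embed-certs-302520 --show-labels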
	I0816 13:50:05.494987   57240 ops.go:34] apiserver oom_adj: -16
	I0816 13:50:05.495066   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.995792   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.495937   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.995513   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.495437   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.995600   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.495194   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.995101   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.495533   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.659383   57240 kubeadm.go:1113] duration metric: took 4.370714211s to wait for elevateKubeSystemPrivileges
	I0816 13:50:09.659425   57240 kubeadm.go:394] duration metric: took 4m59.010243945s to StartCluster
	I0816 13:50:09.659448   57240 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.659529   57240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:50:09.661178   57240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.661475   57240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:50:09.661579   57240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:50:09.661662   57240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-302520"
	I0816 13:50:09.661678   57240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-302520"
	I0816 13:50:09.661693   57240 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-302520"
	W0816 13:50:09.661701   57240 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:50:09.661683   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:50:09.661707   57240 addons.go:69] Setting metrics-server=true in profile "embed-certs-302520"
	I0816 13:50:09.661730   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.661732   57240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-302520"
	I0816 13:50:09.661744   57240 addons.go:234] Setting addon metrics-server=true in "embed-certs-302520"
	W0816 13:50:09.661758   57240 addons.go:243] addon metrics-server should already be in state true
	I0816 13:50:09.661789   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.662063   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662070   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662092   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662093   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662125   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662177   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.663568   57240 out.go:177] * Verifying Kubernetes components...
	I0816 13:50:09.665144   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:50:09.679643   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 13:50:09.679976   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0816 13:50:09.680138   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680460   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680652   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.680677   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681040   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.681060   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681084   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681449   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681659   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.681706   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.681737   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.682300   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0816 13:50:09.682644   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.683099   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.683121   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.683464   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.683993   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.684020   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.684695   57240 addons.go:234] Setting addon default-storageclass=true in "embed-certs-302520"
	W0816 13:50:09.684713   57240 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:50:09.684733   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.685016   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.685044   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.699612   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0816 13:50:09.700235   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.700244   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0816 13:50:09.700776   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.700795   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.700827   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.701285   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.701369   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0816 13:50:09.701457   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.701467   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.701939   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.701980   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.702188   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.702209   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.702494   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.702618   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.702635   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.703042   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.703250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.704568   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.705308   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.707074   57240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:50:09.707074   57240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:50:09.708773   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:50:09.708792   57240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:50:09.708813   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.708894   57240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:09.708924   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:50:09.708941   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.714305   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714338   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714812   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714874   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714928   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.715181   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715215   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715363   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715399   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715512   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715556   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715634   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.715876   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.724172   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0816 13:50:09.724636   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.725184   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.725213   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.725596   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.725799   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.727188   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.727410   57240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:09.727426   57240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:50:09.727447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.729840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730228   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.730255   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730534   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.730723   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.730867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.731014   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.899195   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:50:09.939173   57240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958087   57240 node_ready.go:49] node "embed-certs-302520" has status "Ready":"True"
	I0816 13:50:09.958119   57240 node_ready.go:38] duration metric: took 18.911367ms for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958130   57240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:09.963326   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:10.083721   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:10.184794   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:10.203192   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:50:10.203214   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:50:10.285922   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:50:10.285950   57240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:50:10.370797   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:10.370825   57240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:50:10.420892   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.420942   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421261   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421280   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.421282   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421293   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.421303   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421556   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421620   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421625   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.427229   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.427250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.427591   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.427638   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.427655   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.454486   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:11.225905   57240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041077031s)
	I0816 13:50:11.225958   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.225969   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226248   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226268   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.226273   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226295   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.226310   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226561   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226608   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226627   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447454   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447484   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.447823   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.447890   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.447908   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447924   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447936   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.448179   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.448195   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.448241   57240 addons.go:475] Verifying addon metrics-server=true in "embed-certs-302520"
	I0816 13:50:11.450274   57240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 13:50:11.451676   57240 addons.go:510] duration metric: took 1.790101568s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
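With default-storageclass, storage-provisioner and metrics-server enabled, metrics-server is still Pending in the pod listings that follow; it only serves metrics once its pod is Ready and its APIService is registered. A hedged way to confirm that from a configured client (the APIService name below is the standard metrics-server registration, not something this log prints):

	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl top nodes   # works only after the APIService reports Available=True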
	I0816 13:50:11.971087   57240 pod_ready.go:103] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:12.470167   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.470193   57240 pod_ready.go:82] duration metric: took 2.506842546s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.470203   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474959   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.474980   57240 pod_ready.go:82] duration metric: took 4.769458ms for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474988   57240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479388   57240 pod_ready.go:93] pod "etcd-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.479410   57240 pod_ready.go:82] duration metric: took 4.41564ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479421   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483567   57240 pod_ready.go:93] pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.483589   57240 pod_ready.go:82] duration metric: took 4.159906ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483600   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:14.490212   57240 pod_ready.go:103] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:15.990204   57240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.990226   57240 pod_ready.go:82] duration metric: took 3.506618768s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.990235   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994580   57240 pod_ready.go:93] pod "kube-proxy-spgtw" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.994597   57240 pod_ready.go:82] duration metric: took 4.356588ms for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994605   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068472   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:16.068495   57240 pod_ready.go:82] duration metric: took 73.884906ms for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068503   57240 pod_ready.go:39] duration metric: took 6.110362477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:16.068519   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:50:16.068579   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:50:16.086318   57240 api_server.go:72] duration metric: took 6.424804798s to wait for apiserver process to appear ...
	I0816 13:50:16.086345   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:50:16.086361   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:50:16.091170   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:50:16.092122   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:50:16.092138   57240 api_server.go:131] duration metric: took 5.787898ms to wait for apiserver health ...
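The healthz probe above hits https://192.168.39.125:8443/healthz directly and expects the literal body "ok". An equivalent check from a configured client, which avoids handling the serving certificate by hand, is the raw API call below (a sketch; it assumes kubectl is already pointed at this cluster, as the end of this run confirms):

	kubectl get --raw /healthz
	kubectl get --raw '/readyz?verbose'   # per-check breakdown, useful when healthz flaps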
	I0816 13:50:16.092146   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:50:16.271303   57240 system_pods.go:59] 9 kube-system pods found
	I0816 13:50:16.271338   57240 system_pods.go:61] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.271344   57240 system_pods.go:61] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.271348   57240 system_pods.go:61] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.271353   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.271359   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.271364   57240 system_pods.go:61] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.271370   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.271379   57240 system_pods.go:61] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.271389   57240 system_pods.go:61] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.271398   57240 system_pods.go:74] duration metric: took 179.244421ms to wait for pod list to return data ...
	I0816 13:50:16.271410   57240 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:50:16.468167   57240 default_sa.go:45] found service account: "default"
	I0816 13:50:16.468196   57240 default_sa.go:55] duration metric: took 196.779435ms for default service account to be created ...
	I0816 13:50:16.468207   57240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:50:16.670917   57240 system_pods.go:86] 9 kube-system pods found
	I0816 13:50:16.670943   57240 system_pods.go:89] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.670949   57240 system_pods.go:89] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.670953   57240 system_pods.go:89] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.670957   57240 system_pods.go:89] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.670960   57240 system_pods.go:89] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.670963   57240 system_pods.go:89] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.670967   57240 system_pods.go:89] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.670972   57240 system_pods.go:89] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.670976   57240 system_pods.go:89] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.670984   57240 system_pods.go:126] duration metric: took 202.771216ms to wait for k8s-apps to be running ...
	I0816 13:50:16.670990   57240 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:50:16.671040   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:16.686873   57240 system_svc.go:56] duration metric: took 15.876641ms WaitForService to wait for kubelet
	I0816 13:50:16.686906   57240 kubeadm.go:582] duration metric: took 7.025397638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:50:16.686925   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:50:16.869367   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:50:16.869393   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:50:16.869405   57240 node_conditions.go:105] duration metric: took 182.475776ms to run NodePressure ...
	I0816 13:50:16.869420   57240 start.go:241] waiting for startup goroutines ...
	I0816 13:50:16.869427   57240 start.go:246] waiting for cluster config update ...
	I0816 13:50:16.869436   57240 start.go:255] writing updated cluster config ...
	I0816 13:50:16.869686   57240 ssh_runner.go:195] Run: rm -f paused
	I0816 13:50:16.919168   57240 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:50:16.921207   57240 out.go:177] * Done! kubectl is now configured to use "embed-certs-302520" cluster and "default" namespace by default
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
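The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise before kubeadm init is retried. Condensed into one loop, as a restatement of what the log shows rather than code minikube actually runs:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done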
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 
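	The troubleshooting advice repeated in the kubeadm output above amounts to a short on-node triage. A minimal sketch, assuming shell access to the failing node (for example via minikube ssh with the relevant profile; the profile name below is a placeholder, the commands themselves are the ones named in the log):
	
	    # Check whether the kubelet unit is running and inspect its recent logs
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    # List Kubernetes containers known to CRI-O, then fetch logs for a failing one
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID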
	
	
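	The suggestion emitted by minikube above can be tried as a single retry command; the --extra-config flag is taken verbatim from that suggestion, while the profile name is illustrative:
	
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	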
	==> CRI-O <==
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.944403904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816758944380741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9d1fc16-ba99-46c4-9406-7f2b087c8aab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.944913896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c88c945-48bc-4fb8-a1d2-3725d2d6f18e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.944989322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c88c945-48bc-4fb8-a1d2-3725d2d6f18e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.945250596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c88c945-48bc-4fb8-a1d2-3725d2d6f18e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.981474981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a4c7cf8-af59-4592-9a35-395aa549d1b7 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.981547695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a4c7cf8-af59-4592-9a35-395aa549d1b7 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.983998396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78911142-ae7e-4849-85d4-d7c42a6f2973 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.984398088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816758984375522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78911142-ae7e-4849-85d4-d7c42a6f2973 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.985197737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fece007e-cded-4463-ad5d-db72feaea6f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.985267454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fece007e-cded-4463-ad5d-db72feaea6f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:18 embed-certs-302520 crio[731]: time="2024-08-16 13:59:18.985515542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fece007e-cded-4463-ad5d-db72feaea6f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.029018069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=562cd6f4-b802-491b-a298-379a74d65341 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.029140960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=562cd6f4-b802-491b-a298-379a74d65341 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.030225443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a1cb1d6-4076-457f-bc86-9405e2242fb2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.030617654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816759030596124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a1cb1d6-4076-457f-bc86-9405e2242fb2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.031256853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7d081f7-fb21-43ab-8c9c-ec6ee2f65de7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.031323178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7d081f7-fb21-43ab-8c9c-ec6ee2f65de7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.031525983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7d081f7-fb21-43ab-8c9c-ec6ee2f65de7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.063963327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cff064c-c135-4ab8-afbc-60aa3e8ea3f3 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.064060371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cff064c-c135-4ab8-afbc-60aa3e8ea3f3 name=/runtime.v1.RuntimeService/Version
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.065079514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28de4c08-5b2e-462d-a2e2-a368568bda55 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.065515004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816759065491198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28de4c08-5b2e-462d-a2e2-a368568bda55 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.066055907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2ef8f1a-cf06-4532-ae74-4b3a3419212e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.066128140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2ef8f1a-cf06-4532-ae74-4b3a3419212e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 13:59:19 embed-certs-302520 crio[731]: time="2024-08-16 13:59:19.066449847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2ef8f1a-cf06-4532-ae74-4b3a3419212e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d55f680e1786       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3263fbb7130a4       storage-provisioner
	ffcd4d176bf4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9438a8a614cf7       coredns-6f6b679f8f-whnqh
	25b192393075e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f510cef5d0892       coredns-6f6b679f8f-zh69g
	c4414024807b0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   8f708e8ed4079       kube-proxy-spgtw
	bd62b9f92fb76       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   d5efdc758602e       kube-scheduler-embed-certs-302520
	8473f5fc22d8f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   b32a6f7073caa       etcd-embed-certs-302520
	3ac14a9494897       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   4724ddd4a2ac0       kube-apiserver-embed-certs-302520
	c587865a89293       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   0d8ae2faf420d       kube-controller-manager-embed-certs-302520
	ecc28eb673520       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   8682941ad9d1d       kube-apiserver-embed-certs-302520
	
	
	==> coredns [25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-302520
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-302520
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=embed-certs-302520
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-302520
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:59:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:55:21 +0000   Fri, 16 Aug 2024 13:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:55:21 +0000   Fri, 16 Aug 2024 13:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:55:21 +0000   Fri, 16 Aug 2024 13:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:55:21 +0000   Fri, 16 Aug 2024 13:50:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    embed-certs-302520
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66a9767091de4d4dbfef467bedb1fef1
	  System UUID:                66a97670-91de-4d4d-bfef-467bedb1fef1
	  Boot ID:                    214002d4-e2fe-469e-a5c9-fe7ebc908da5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-whnqh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-6f6b679f8f-zh69g                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-302520                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-302520             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-302520    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-spgtw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-302520             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-q58h2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node embed-certs-302520 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node embed-certs-302520 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node embed-certs-302520 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node embed-certs-302520 event: Registered Node embed-certs-302520 in Controller
	
	
	==> dmesg <==
	[  +0.055279] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.001510] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.484198] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621418] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 13:45] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.061219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070451] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.184541] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.162796] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.294148] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.223794] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.065464] kauditd_printk_skb: 132 callbacks suppressed
	[  +2.210258] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +4.635571] kauditd_printk_skb: 95 callbacks suppressed
	[  +6.898897] kauditd_printk_skb: 85 callbacks suppressed
	[Aug16 13:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.970635] systemd-fstab-generator[2570]: Ignoring "noauto" option for root device
	[Aug16 13:50] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.776109] systemd-fstab-generator[2888]: Ignoring "noauto" option for root device
	[  +5.452761] systemd-fstab-generator[3007]: Ignoring "noauto" option for root device
	[  +0.100947] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 13:51] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998] <==
	{"level":"info","ts":"2024-08-16T13:49:59.575189Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T13:49:59.575286Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-08-16T13:49:59.575413Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-08-16T13:49:59.575614Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T13:49:59.575651Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T13:50:00.090849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T13:50:00.090905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T13:50:00.090921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 1"}
	{"level":"info","ts":"2024-08-16T13:50:00.090931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T13:50:00.090951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-08-16T13:50:00.090965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 2"}
	{"level":"info","ts":"2024-08-16T13:50:00.090972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-08-16T13:50:00.096867Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:50:00.099973Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:embed-certs-302520 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:50:00.100876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:50:00.101919Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:50:00.105877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:50:00.106488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:50:00.106916Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:50:00.106946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:50:00.107445Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:50:00.111326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2024-08-16T13:50:00.111727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:50:00.111904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:50:00.111951Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 13:59:19 up 14 min,  0 users,  load average: 0.31, 0.23, 0.18
	Linux embed-certs-302520 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577] <==
	W0816 13:55:02.615430       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:55:02.615758       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:55:02.616886       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:55:02.616935       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 13:56:02.617910       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:56:02.618002       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 13:56:02.618098       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:56:02.618132       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:56:02.619185       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:56:02.619252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 13:58:02.619657       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 13:58:02.619669       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 13:58:02.620077       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0816 13:58:02.620154       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 13:58:02.621238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 13:58:02.621324       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6] <==
	W0816 13:49:52.938631       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.028311       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.078602       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.081132       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.110155       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.145297       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.178944       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.196842       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.207551       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.239335       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.283672       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.323123       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.388561       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.399093       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.411666       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.434924       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.541079       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.732667       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.749106       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.794230       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.955247       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.970728       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:54.041710       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:54.294717       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:54.363176       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd] <==
	E0816 13:54:08.567860       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:54:09.109115       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:54:38.573256       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:54:39.117202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:55:08.580634       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:55:09.126244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:55:21.637107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-302520"
	E0816 13:55:38.587453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:55:39.134952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:55:59.547341       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="310.227µs"
	E0816 13:56:08.594182       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:56:09.142366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 13:56:11.543533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="168.11µs"
	E0816 13:56:38.600365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:56:39.150381       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:57:08.607291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:57:09.164293       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:57:38.614386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:57:39.172892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:58:08.621597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:58:09.181001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:58:38.628150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:58:39.189306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 13:59:08.634927       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:59:09.197535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:50:10.327993       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:50:10.338192       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0816 13:50:10.338293       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:50:10.575891       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:50:10.575969       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:50:10.575999       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:50:10.599917       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:50:10.600255       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:50:10.600290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:50:10.601582       1 config.go:197] "Starting service config controller"
	I0816 13:50:10.601608       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:50:10.601621       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:50:10.601646       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:50:10.602201       1 config.go:326] "Starting node config controller"
	I0816 13:50:10.602208       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:50:10.702067       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:50:10.702160       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:50:10.702297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed] <==
	W0816 13:50:02.504029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 13:50:02.504160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.558348       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 13:50:02.558584       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 13:50:02.606550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:50:02.606694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.655457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 13:50:02.655584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.665498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 13:50:02.665692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.746206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:50:02.746470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.805547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 13:50:02.805756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.842397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:50:02.844663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.845160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 13:50:02.845210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.845169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:50:02.845254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.915603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 13:50:02.915662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.955048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:50:02.955201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0816 13:50:05.733675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 13:58:04 embed-certs-302520 kubelet[2895]: E0816 13:58:04.688373    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816684688125562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:14 embed-certs-302520 kubelet[2895]: E0816 13:58:14.690885    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816694690452111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:14 embed-certs-302520 kubelet[2895]: E0816 13:58:14.690930    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816694690452111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:15 embed-certs-302520 kubelet[2895]: E0816 13:58:15.530988    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 13:58:24 embed-certs-302520 kubelet[2895]: E0816 13:58:24.693351    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816704692607252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:24 embed-certs-302520 kubelet[2895]: E0816 13:58:24.693842    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816704692607252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:27 embed-certs-302520 kubelet[2895]: E0816 13:58:27.530592    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 13:58:34 embed-certs-302520 kubelet[2895]: E0816 13:58:34.694946    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816714694590229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:34 embed-certs-302520 kubelet[2895]: E0816 13:58:34.695114    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816714694590229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:39 embed-certs-302520 kubelet[2895]: E0816 13:58:39.530558    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 13:58:44 embed-certs-302520 kubelet[2895]: E0816 13:58:44.697664    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816724697424812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:44 embed-certs-302520 kubelet[2895]: E0816 13:58:44.697836    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816724697424812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:54 embed-certs-302520 kubelet[2895]: E0816 13:58:54.534587    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 13:58:54 embed-certs-302520 kubelet[2895]: E0816 13:58:54.699583    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816734699371196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:58:54 embed-certs-302520 kubelet[2895]: E0816 13:58:54.699608    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816734699371196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]: E0816 13:59:04.552961    2895 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]: E0816 13:59:04.701741    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816744701320280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:59:04 embed-certs-302520 kubelet[2895]: E0816 13:59:04.701927    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816744701320280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:59:07 embed-certs-302520 kubelet[2895]: E0816 13:59:07.531155    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 13:59:14 embed-certs-302520 kubelet[2895]: E0816 13:59:14.704255    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816754703571993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 13:59:14 embed-certs-302520 kubelet[2895]: E0816 13:59:14.704324    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816754703571993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9] <==
	I0816 13:50:11.745010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:50:11.754761       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:50:11.755002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:50:11.762949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:50:11.763179       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-302520_93dce01a-f641-4cdf-ad96-662b0604b4bf!
	I0816 13:50:11.767323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9bfc3439-9ea9-4a3c-8502-c0e0a228ca4f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-302520_93dce01a-f641-4cdf-ad96-662b0604b4bf became leader
	I0816 13:50:11.864009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-302520_93dce01a-f641-4cdf-ad96-662b0604b4bf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-302520 -n embed-certs-302520
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-302520 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-q58h2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-302520 describe pod metrics-server-6867b74b74-q58h2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-302520 describe pod metrics-server-6867b74b74-q58h2: exit status 1 (60.414435ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-q58h2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-302520 describe pod metrics-server-6867b74b74-q58h2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)
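Reading the log dump above: the kube-scheduler "forbidden" list/watch errors appear to be ordinary startup noise, since they stop once the client-ca/extension-apiserver-authentication caches sync at 13:50:05; the post-mortem failure itself is only the missing metrics-server pod. As a minimal sketch (assuming the embed-certs-302520 context from this run is still available; the test does not run this command itself), the scheduler's RBAC can be double-checked with an impersonated auth query:

	kubectl --context embed-certs-302520 auth can-i list pods --as=system:kube-scheduler

Once RBAC has propagated this is expected to answer "yes" for each of the resources listed in the warnings above.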

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
E0816 13:53:56.823115   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
E0816 13:55:40.921614   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
E0816 13:56:59.896321   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
[last warning repeated 115 more times]
E0816 13:58:56.823377   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
E0816 14:00:40.921287   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (215.583313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-882237" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (217.156843ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-882237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-882237 logs -n 25: (1.600114235s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-779306 -- sudo                         | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-779306                                 | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759623                           | kubernetes-upgrade-759623    | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:35 UTC |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-302520            | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:42:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:42:15.998819   58430 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:42:15.998960   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.998970   58430 out.go:358] Setting ErrFile to fd 2...
	I0816 13:42:15.998976   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.999197   58430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:42:15.999747   58430 out.go:352] Setting JSON to false
	I0816 13:42:16.000715   58430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5081,"bootTime":1723810655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:42:16.000770   58430 start.go:139] virtualization: kvm guest
	I0816 13:42:16.003216   58430 out.go:177] * [default-k8s-diff-port-893736] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:42:16.004663   58430 notify.go:220] Checking for updates...
	I0816 13:42:16.004698   58430 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:42:16.006298   58430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:42:16.007719   58430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:42:16.009073   58430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:42:16.010602   58430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:42:16.012058   58430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:42:16.013799   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:42:16.014204   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.014278   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.029427   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0816 13:42:16.029977   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.030548   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.030573   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.030903   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.031164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.031412   58430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:42:16.031691   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.031731   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.046245   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0816 13:42:16.046668   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.047205   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.047244   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.047537   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.047730   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.080470   58430 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:42:16.081700   58430 start.go:297] selected driver: kvm2
	I0816 13:42:16.081721   58430 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.081825   58430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:42:16.082512   58430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.082593   58430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:42:16.097784   58430 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:42:16.098155   58430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:42:16.098223   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:42:16.098233   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:42:16.098274   58430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.098365   58430 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.100341   58430 out.go:177] * Starting "default-k8s-diff-port-893736" primary control-plane node in "default-k8s-diff-port-893736" cluster
	I0816 13:42:17.205125   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:16.101925   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:42:16.101966   58430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:42:16.101973   58430 cache.go:56] Caching tarball of preloaded images
	I0816 13:42:16.102052   58430 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:42:16.102063   58430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:42:16.102162   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:42:16.102344   58430 start.go:360] acquireMachinesLock for default-k8s-diff-port-893736: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:42:23.285172   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:26.357214   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:32.437218   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:35.509221   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:41.589174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:44.661162   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:50.741223   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:53.813193   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:59.893180   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:02.965205   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:09.045252   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:12.117232   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:18.197189   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:21.269234   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:27.349182   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:30.421174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:36.501197   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:39.573246   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:42.577406   57440 start.go:364] duration metric: took 4m10.318515071s to acquireMachinesLock for "no-preload-311070"
	I0816 13:43:42.577513   57440 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:43:42.577529   57440 fix.go:54] fixHost starting: 
	I0816 13:43:42.577955   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:43:42.577989   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:43:42.593032   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0816 13:43:42.593416   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:43:42.593860   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:43:42.593882   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:43:42.594256   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:43:42.594434   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:43:42.594586   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:43:42.596234   57440 fix.go:112] recreateIfNeeded on no-preload-311070: state=Stopped err=<nil>
	I0816 13:43:42.596261   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	W0816 13:43:42.596431   57440 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:43:42.598334   57440 out.go:177] * Restarting existing kvm2 VM for "no-preload-311070" ...
	I0816 13:43:42.574954   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:43:42.574990   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575324   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:43:42.575349   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575554   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:43:42.577250   57240 machine.go:96] duration metric: took 4m37.4289608s to provisionDockerMachine
	I0816 13:43:42.577309   57240 fix.go:56] duration metric: took 4m37.450613575s for fixHost
	I0816 13:43:42.577314   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 4m37.450631849s
	W0816 13:43:42.577330   57240 start.go:714] error starting host: provision: host is not running
	W0816 13:43:42.577401   57240 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 13:43:42.577410   57240 start.go:729] Will try again in 5 seconds ...
	I0816 13:43:42.599558   57440 main.go:141] libmachine: (no-preload-311070) Calling .Start
	I0816 13:43:42.599720   57440 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:43:42.600383   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:43:42.600682   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:43:42.601157   57440 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:43:42.601868   57440 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:43:43.816308   57440 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:43:43.817179   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:43.817566   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:43.817586   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:43.817516   58770 retry.go:31] will retry after 295.385031ms: waiting for machine to come up
	I0816 13:43:44.115046   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.115850   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.115875   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.115787   58770 retry.go:31] will retry after 340.249659ms: waiting for machine to come up
	I0816 13:43:44.457278   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.457722   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.457752   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.457657   58770 retry.go:31] will retry after 476.905089ms: waiting for machine to come up
	I0816 13:43:44.936230   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.936674   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.936714   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.936640   58770 retry.go:31] will retry after 555.288542ms: waiting for machine to come up
	I0816 13:43:45.493301   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.493698   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.493724   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.493657   58770 retry.go:31] will retry after 462.336365ms: waiting for machine to come up
	I0816 13:43:45.957163   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.957553   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.957580   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.957509   58770 retry.go:31] will retry after 886.665194ms: waiting for machine to come up
	I0816 13:43:46.845380   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:46.845743   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:46.845763   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:46.845723   58770 retry.go:31] will retry after 909.05227ms: waiting for machine to come up
	I0816 13:43:47.579134   57240 start.go:360] acquireMachinesLock for embed-certs-302520: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:43:47.755998   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:47.756439   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:47.756460   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:47.756407   58770 retry.go:31] will retry after 1.380778497s: waiting for machine to come up
	I0816 13:43:49.138398   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:49.138861   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:49.138884   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:49.138811   58770 retry.go:31] will retry after 1.788185586s: waiting for machine to come up
	I0816 13:43:50.929915   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:50.930326   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:50.930356   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:50.930276   58770 retry.go:31] will retry after 1.603049262s: waiting for machine to come up
	I0816 13:43:52.536034   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:52.536492   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:52.536518   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:52.536438   58770 retry.go:31] will retry after 1.964966349s: waiting for machine to come up
	I0816 13:43:54.504003   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:54.504408   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:54.504440   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:54.504363   58770 retry.go:31] will retry after 3.616796835s: waiting for machine to come up
	I0816 13:43:58.122295   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:58.122714   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:58.122747   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:58.122673   58770 retry.go:31] will retry after 3.893804146s: waiting for machine to come up
	I0816 13:44:02.020870   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021351   57440 main.go:141] libmachine: (no-preload-311070) Found IP for machine: 192.168.61.116
	I0816 13:44:02.021372   57440 main.go:141] libmachine: (no-preload-311070) Reserving static IP address...
	I0816 13:44:02.021385   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has current primary IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021917   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.021948   57440 main.go:141] libmachine: (no-preload-311070) Reserved static IP address: 192.168.61.116
	I0816 13:44:02.021966   57440 main.go:141] libmachine: (no-preload-311070) DBG | skip adding static IP to network mk-no-preload-311070 - found existing host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"}
	I0816 13:44:02.021977   57440 main.go:141] libmachine: (no-preload-311070) DBG | Getting to WaitForSSH function...
	I0816 13:44:02.021989   57440 main.go:141] libmachine: (no-preload-311070) Waiting for SSH to be available...
	I0816 13:44:02.024661   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025071   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.025094   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025327   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH client type: external
	I0816 13:44:02.025349   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa (-rw-------)
	I0816 13:44:02.025376   57440 main.go:141] libmachine: (no-preload-311070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:02.025387   57440 main.go:141] libmachine: (no-preload-311070) DBG | About to run SSH command:
	I0816 13:44:02.025406   57440 main.go:141] libmachine: (no-preload-311070) DBG | exit 0
	I0816 13:44:02.148864   57440 main.go:141] libmachine: (no-preload-311070) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:02.149279   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:44:02.149868   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.152149   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152460   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.152481   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152681   57440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:44:02.152853   57440 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:02.152869   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:02.153131   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.155341   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155703   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.155743   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155845   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:02.156024   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156199   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156350   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.156557   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.156747   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.156758   57440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:02.261263   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:02.261290   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261514   57440 buildroot.go:166] provisioning hostname "no-preload-311070"
	I0816 13:44:02.261528   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261696   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.264473   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.264892   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.264936   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.265030   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.265198   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265365   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265485   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.265624   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.265796   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.265813   57440 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname
	I0816 13:44:02.384079   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-311070
	
	I0816 13:44:02.384112   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.386756   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387065   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.387104   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387285   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.387501   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387699   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387843   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.387997   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.388193   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.388218   57440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-311070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-311070/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-311070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:02.502089   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:02.502122   57440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:02.502159   57440 buildroot.go:174] setting up certificates
	I0816 13:44:02.502173   57440 provision.go:84] configureAuth start
	I0816 13:44:02.502191   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.502484   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.505215   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505523   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.505560   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505726   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.507770   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508033   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.508062   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508193   57440 provision.go:143] copyHostCerts
	I0816 13:44:02.508249   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:02.508267   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:02.508336   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:02.508426   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:02.508435   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:02.508460   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:02.508520   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:02.508527   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:02.508548   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:02.508597   57440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.no-preload-311070 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-311070]
	I0816 13:44:02.732379   57440 provision.go:177] copyRemoteCerts
	I0816 13:44:02.732434   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:02.732458   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.735444   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.735803   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.735837   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.736040   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.736274   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.736428   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.736587   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:02.819602   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:02.843489   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:44:02.866482   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:02.889908   57440 provision.go:87] duration metric: took 387.723287ms to configureAuth
	I0816 13:44:02.889936   57440 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:02.890151   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:02.890250   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.892851   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893158   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.893184   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893381   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.893607   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893777   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893925   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.894076   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.894267   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.894286   57440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:03.153730   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:03.153755   57440 machine.go:96] duration metric: took 1.000891309s to provisionDockerMachine
	I0816 13:44:03.153766   57440 start.go:293] postStartSetup for "no-preload-311070" (driver="kvm2")
	I0816 13:44:03.153776   57440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:03.153790   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.154088   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:03.154122   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.156612   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.156931   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.156969   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.157113   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.157299   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.157438   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.157595   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.241700   57440 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:03.246133   57440 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:03.246155   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:03.246221   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:03.246292   57440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:03.246379   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:03.257778   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:03.283511   57440 start.go:296] duration metric: took 129.718161ms for postStartSetup
	I0816 13:44:03.283552   57440 fix.go:56] duration metric: took 20.706029776s for fixHost
	I0816 13:44:03.283603   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.286296   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286608   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.286651   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286803   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.287016   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287158   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287298   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.287477   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:03.287639   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:03.287649   57440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:03.389691   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815843.358144829
	
	I0816 13:44:03.389710   57440 fix.go:216] guest clock: 1723815843.358144829
	I0816 13:44:03.389717   57440 fix.go:229] Guest: 2024-08-16 13:44:03.358144829 +0000 UTC Remote: 2024-08-16 13:44:03.283556408 +0000 UTC m=+271.159980604 (delta=74.588421ms)
	I0816 13:44:03.389749   57440 fix.go:200] guest clock delta is within tolerance: 74.588421ms
	I0816 13:44:03.389754   57440 start.go:83] releasing machines lock for "no-preload-311070", held for 20.812259998s
	I0816 13:44:03.389779   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.390029   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:03.392788   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393137   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.393160   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393365   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.393870   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394042   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394125   57440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:03.394180   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.394215   57440 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:03.394235   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.396749   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.396813   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397124   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397152   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397180   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397197   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397466   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397543   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397717   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397731   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397874   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397921   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397998   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.398077   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.473552   57440 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:03.497958   57440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:03.644212   57440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:03.651347   57440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:03.651455   57440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:03.667822   57440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:03.667842   57440 start.go:495] detecting cgroup driver to use...
	I0816 13:44:03.667915   57440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:03.685838   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:03.700002   57440 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:03.700073   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:03.713465   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:03.726793   57440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:03.838274   57440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:03.967880   57440 docker.go:233] disabling docker service ...
	I0816 13:44:03.967951   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:03.982178   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:03.994574   57440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:04.132374   57440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:04.242820   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:04.257254   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:04.277961   57440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:04.278018   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.288557   57440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:04.288621   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.299108   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.310139   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.320850   57440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:04.332224   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.342905   57440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.361606   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
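All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup driver, conmon's cgroup, and a default_sysctls entry that re-opens privileged ports for unprivileged pods. Assuming nothing else rewrites the file, the affected keys end up roughly as below (section headers shown only for orientation; the sed commands rewrite the keys wherever they already sit in the drop-in):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]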
	I0816 13:44:04.372423   57440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:04.382305   57440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:04.382355   57440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:04.396774   57440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
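The failed sysctl above is expected: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding for pod traffic. A minimal reproduction of that check (sketch):

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables      # succeeds once the module is loaded
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'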
	I0816 13:44:04.408417   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:04.516638   57440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:04.684247   57440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:04.684316   57440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:04.689824   57440 start.go:563] Will wait 60s for crictl version
	I0816 13:44:04.689878   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:04.693456   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:04.732628   57440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:04.732712   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.760111   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.790127   57440 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
	I0816 13:44:04.791315   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:04.794224   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794587   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:04.794613   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794794   57440 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:04.798848   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
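The one-liner above is the runner's idempotent way of pinning host.minikube.internal: strip any previous entry with grep -v, append the current one, and copy the temp file back over /etc/hosts in a single sudo cp. The net effect is one line such as:

    $ grep host.minikube.internal /etc/hosts
    192.168.61.1	host.minikube.internal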
	I0816 13:44:04.811522   57440 kubeadm.go:883] updating cluster {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:04.811628   57440 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:04.811685   57440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:04.845546   57440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:04.845567   57440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:04.845630   57440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.845654   57440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.845687   57440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:04.845714   57440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.845694   57440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.845789   57440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.845839   57440 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:44:04.845875   57440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.847440   57440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.847454   57440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.847484   57440 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:44:04.847429   57440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847431   57440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.847508   57440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.036225   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.071514   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.075186   57440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 13:44:05.075233   57440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.075273   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111591   57440 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 13:44:05.111634   57440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.111687   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111704   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.145127   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.145289   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.186194   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.200886   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.203824   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.208201   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.209021   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.234117   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.234893   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.245119   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 13:44:05.305971   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:44:05.306084   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.374880   57440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 13:44:05.374922   57440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.374971   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399114   57440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 13:44:05.399156   57440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.399187   57440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 13:44:05.399216   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399225   57440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.399267   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399318   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:44:05.399414   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:05.401940   57440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 13:44:05.401975   57440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.402006   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.513930   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 13:44:05.513961   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514017   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514032   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.514059   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.514112   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 13:44:05.514115   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.514150   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.634275   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.634340   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.864118   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:07.542707   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.028664898s)
	I0816 13:44:07.542745   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 13:44:07.542770   57440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542773   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.028589727s)
	I0816 13:44:07.542817   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.028737534s)
	I0816 13:44:07.542831   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542837   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:07.542869   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:07.542888   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.908584925s)
	I0816 13:44:07.542937   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:07.542951   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.908590671s)
	I0816 13:44:07.542995   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:07.543034   57440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.678883978s)
	I0816 13:44:07.543068   57440 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 13:44:07.543103   57440 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:07.543138   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:11.390456   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (3.847434032s)
	I0816 13:44:11.390507   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:44:11.390610   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.390647   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.847797916s)
	I0816 13:44:11.390674   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 13:44:11.390684   57440 ssh_runner.go:235] Completed: which crictl: (3.847535001s)
	I0816 13:44:11.390740   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.390780   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (3.847819859s)
	I0816 13:44:11.390810   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.847960553s)
	I0816 13:44:11.390825   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:44:11.390848   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:11.390908   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:11.390923   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (3.848033361s)
	I0816 13:44:11.390978   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:11.461833   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 13:44:11.461859   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461905   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461922   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:44:11.461843   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.461990   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 13:44:11.462013   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:11.462557   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:44:11.462649   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
	I0816 13:44:13.839070   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.377139357s)
	I0816 13:44:13.839101   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 13:44:13.839141   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839207   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839255   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377282192s)
	I0816 13:44:13.839312   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.377281378s)
	I0816 13:44:13.839358   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 13:44:13.839358   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.376690281s)
	I0816 13:44:13.839379   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 13:44:13.839318   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:13.880059   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 13:44:13.880203   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:15.818912   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.979684366s)
	I0816 13:44:15.818943   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 13:44:15.818975   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.938747663s)
	I0816 13:44:15.818986   57440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.819000   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 13:44:15.819043   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:17.795989   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976923514s)
	I0816 13:44:17.796013   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 13:44:17.796040   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:17.796088   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:19.147815   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351703003s)
	I0816 13:44:19.147843   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 13:44:19.147869   57440 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.147919   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.791370   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 13:44:19.791414   57440 cache_images.go:123] Successfully loaded all cached images
	I0816 13:44:19.791421   57440 cache_images.go:92] duration metric: took 14.945842963s to LoadCachedImages
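Because this is the no-preload profile, none of the control-plane images existed in the runtime (the earlier "needs transfer" lines), so each one is taken from minikube's on-disk cache: any stale tag is removed with crictl rmi, the tarball under /var/lib/minikube/images is reused when it is already there ("copy: skipping ... (exists)"), and podman load imports it into cri-o's image store. Done by hand for a single image, the pattern is roughly:

    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0         # drop any stale tag
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0     # import from the cache tarball
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.15-0   # verify it landed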
	I0816 13:44:19.791440   57440 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0 crio true true} ...
	I0816 13:44:19.791593   57440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:19.791681   57440 ssh_runner.go:195] Run: crio config
	I0816 13:44:19.843963   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:19.843984   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:19.844003   57440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:19.844029   57440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-311070 NodeName:no-preload-311070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:19.844189   57440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-311070"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:19.844250   57440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:19.854942   57440 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:19.855014   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:19.864794   57440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 13:44:19.881210   57440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:19.897450   57440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0816 13:44:19.916038   57440 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:19.919995   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:19.934081   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:20.077422   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:20.093846   57440 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070 for IP: 192.168.61.116
	I0816 13:44:20.093864   57440 certs.go:194] generating shared ca certs ...
	I0816 13:44:20.093881   57440 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:20.094055   57440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:20.094120   57440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:20.094135   57440 certs.go:256] generating profile certs ...
	I0816 13:44:20.094236   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.key
	I0816 13:44:20.094325   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key.000c4f90
	I0816 13:44:20.094391   57440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key
	I0816 13:44:20.094529   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:20.094571   57440 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:20.094584   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:20.094621   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:20.094654   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:20.094795   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:20.094874   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:20.096132   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:20.130987   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:20.160701   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:20.187948   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:20.217162   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:44:20.242522   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:44:20.273582   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:20.300613   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:20.328363   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:20.353396   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:20.377770   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:20.401760   57440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:20.418302   57440 ssh_runner.go:195] Run: openssl version
	I0816 13:44:20.424065   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:20.434841   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439352   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439398   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.445210   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:20.455727   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:20.466095   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470528   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470568   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.476080   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:20.486189   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:20.496373   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500696   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500737   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.506426   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
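Each CA above is copied into /usr/share/ca-certificates and then symlinked under /etc/ssl/certs using its subject-hash name, which is how OpenSSL looks CAs up; the hash in the link name (3ec20f2e.0, b5213941.0, 51391683.0) is simply the output of the openssl x509 -hash call that precedes it. Illustrative for the minikubeCA bundle (the hash value depends on the certificate):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0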
	I0816 13:44:20.517130   57440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:20.521664   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:20.527604   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:20.533478   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:20.539285   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:20.545042   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:20.550681   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
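The -checkend 86400 probes are how the runner decides whether the existing control-plane certificates can be reused: openssl exits 0 if the certificate will not expire within the next 86400 seconds (24 h) and non-zero otherwise, so a plain exit-status check is enough. Roughly:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "cert valid for at least 24h, reuse it"
    else
        echo "cert missing or about to expire, regenerate"
    fi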
	I0816 13:44:20.556239   57440 kubeadm.go:392] StartCluster: {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:20.556314   57440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:20.556391   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.594069   57440 cri.go:89] found id: ""
	I0816 13:44:20.594128   57440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:20.604067   57440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:20.604085   57440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:20.604131   57440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:20.614182   57440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:20.615072   57440 kubeconfig.go:125] found "no-preload-311070" server: "https://192.168.61.116:8443"
	I0816 13:44:20.617096   57440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:20.626046   57440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0816 13:44:20.626069   57440 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:20.626078   57440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:20.626114   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.659889   57440 cri.go:89] found id: ""
	I0816 13:44:20.659954   57440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:20.676977   57440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:20.686930   57440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:20.686946   57440 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:20.686985   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:20.696144   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:20.696222   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:20.705550   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:20.714350   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:20.714399   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:20.723636   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.732287   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:20.732329   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.741390   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:20.749913   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:20.749956   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:20.758968   57440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:20.768054   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:20.872847   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:21.933273   57440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060394194s)
	I0816 13:44:21.933303   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.130462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
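On this restart path the runner does not run a full kubeadm init; it replays only the phases it needs against the freshly copied /var/tmp/minikube/kubeadm.yaml, in the order shown above, using the kubeadm binary that minikube ships. Condensed (sketch; $phase is left unquoted on purpose so "certs all" expands to two arguments):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done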
	I0816 13:44:23.689897   58430 start.go:364] duration metric: took 2m7.587518205s to acquireMachinesLock for "default-k8s-diff-port-893736"
	I0816 13:44:23.689958   58430 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:23.689965   58430 fix.go:54] fixHost starting: 
	I0816 13:44:23.690363   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:23.690401   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:23.707766   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0816 13:44:23.708281   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:23.709439   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:23.709462   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:23.709757   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:23.709906   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:23.710017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:23.711612   58430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-893736: state=Stopped err=<nil>
	I0816 13:44:23.711655   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	W0816 13:44:23.711797   58430 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:23.713600   58430 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-893736" ...
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
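fix.go reads the guest clock over SSH (date +%s.%N), compares it with the host-side timestamp and only resyncs when the delta exceeds a tolerance; here the ~75ms delta is accepted. A small Go sketch of that comparison using the two timestamps printed above; the 2s tolerance is an assumption of the sketch, not a value taken from minikube.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough to
// the host clock to skip a resync. The tolerance value is assumed; the two
// timestamps in main are the ones printed in the log above.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Date(2024, 8, 16, 13, 44, 23, 663875328, time.UTC)
	host := time.Date(2024, 8, 16, 13, 44, 23, 588520483, time.UTC)
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok) // ~75.354845ms, true
}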
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
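Before declaring the runtime ready, the log shows CRI-O being reconfigured and restarted, a 60s wait for /var/run/crio/crio.sock, and a crictl version probe. A hedged Go sketch of that wait-then-probe step; the socket path, crictl path and 60s budget come from the log, while the 200ms polling interval is assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForCRISocket mirrors the "Will wait 60s for socket path" and
// "Will wait 60s for crictl version" steps above: poll until the unix socket
// exists, then ask crictl for the runtime version.
func waitForCRISocket(sock string, budget time.Duration) error {
	start := time.Now()
	for time.Since(start) < budget {
		if _, err := os.Stat(sock); err == nil {
			out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
			if err != nil {
				return fmt.Errorf("crictl version failed: %v\n%s", err, out)
			}
			fmt.Printf("%s", out)
			return nil
		}
		time.Sleep(200 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("socket %s did not appear within %v", sock, budget)
}

func main() {
	if err := waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}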
	I0816 13:44:23.714877   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Start
	I0816 13:44:23.715069   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring networks are active...
	I0816 13:44:23.715788   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network default is active
	I0816 13:44:23.716164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network mk-default-k8s-diff-port-893736 is active
	I0816 13:44:23.716648   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Getting domain xml...
	I0816 13:44:23.717424   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Creating domain...
	I0816 13:44:24.979917   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting to get IP...
	I0816 13:44:24.980942   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981375   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981448   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:24.981363   59082 retry.go:31] will retry after 199.038336ms: waiting for machine to come up
	I0816 13:44:25.181886   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182350   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.182330   59082 retry.go:31] will retry after 297.566018ms: waiting for machine to come up
	I0816 13:44:25.481811   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482271   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482296   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.482234   59082 retry.go:31] will retry after 297.833233ms: waiting for machine to come up
	I0816 13:44:25.781831   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782445   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782479   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.782400   59082 retry.go:31] will retry after 459.810978ms: waiting for machine to come up
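The retry.go lines above show the driver polling libvirt for the VM's DHCP lease and sleeping a randomized, growing interval between attempts (199ms, 297ms, 297ms, 459ms). An illustrative Go retry loop in that spirit; the lookup function, the demo IP address and the exact backoff schedule are assumptions, and only the overall pattern mirrors the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain ...".
var errNoIP = errors.New("machine has no IP yet")

// waitForIP retries a lookup with a randomized, growing sleep until the
// deadline expires. lookup is a stand-in for the libvirt DHCP-lease query.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	for attempt := 1; time.Since(start) < deadline; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		// Assumed backoff schedule, loosely matching the intervals in the log.
		sleep := time.Duration(100+rand.Intn(200)*attempt) * time.Millisecond
		fmt.Printf("retry %d: %v, will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
	}
	return "", fmt.Errorf("timed out after %v waiting for machine IP", deadline)
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errNoIP
		}
		return "192.168.50.10", nil // hypothetical address for the demo
	}, 30*time.Second)
	fmt.Println(ip, err)
}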
	I0816 13:44:22.220022   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.317717   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:22.317800   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:22.818025   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.318171   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.354996   57440 api_server.go:72] duration metric: took 1.037294965s to wait for apiserver process to appear ...
	I0816 13:44:23.355023   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:23.355043   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:23.355677   57440 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0816 13:44:23.855190   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.719152   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.719184   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.719204   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.756329   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.756366   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.855581   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.862856   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:26.862885   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.355555   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.365664   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.365702   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.855844   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.863185   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.863227   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:28.355490   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:28.361410   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:44:28.374558   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:28.374593   57440 api_server.go:131] duration metric: took 5.019562023s to wait for apiserver health ...
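The healthz probes above walk through the usual restart progression: connection refused while the apiserver is still coming up, 403 for the anonymous probe before RBAC bootstrap completes, 500 while poststarthooks (rbac/bootstrap-roles, bootstrap-controller, system priority classes) are pending, then 200. A Go sketch of such a polling loop; the ~500ms interval matches the spacing of the checks in the log, while the overall deadline and the decision to skip TLS verification are assumptions of the sketch.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline expires. Refused connections, 403 and 500 responses are all
// treated as "not ready yet", matching the progression in the log above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", deadline)
}

func main() {
	_ = waitForHealthz("https://192.168.61.116:8443/healthz", 4*time.Minute)
}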
	I0816 13:44:28.374604   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:28.374613   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:28.376749   57440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:28.378413   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:28.401199   57440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:28.420798   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:28.452605   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:28.452645   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:28.452655   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:28.452663   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:28.452671   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:28.452680   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:28.452689   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:28.452704   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:28.452710   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:44:28.452719   57440 system_pods.go:74] duration metric: took 31.89892ms to wait for pod list to return data ...
	I0816 13:44:28.452726   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:28.463229   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:28.463262   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:28.463275   57440 node_conditions.go:105] duration metric: took 10.544476ms to run NodePressure ...
	I0816 13:44:28.463296   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:28.809304   57440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819091   57440 kubeadm.go:739] kubelet initialised
	I0816 13:44:28.819115   57440 kubeadm.go:740] duration metric: took 9.779672ms waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819124   57440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:28.827828   57440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.840277   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840310   57440 pod_ready.go:82] duration metric: took 12.450089ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.840322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840333   57440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.847012   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847036   57440 pod_ready.go:82] duration metric: took 6.692927ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.847045   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847050   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.861358   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861404   57440 pod_ready.go:82] duration metric: took 14.346379ms for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.861417   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861428   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.870641   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870663   57440 pod_ready.go:82] duration metric: took 9.224713ms for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.870671   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870678   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.224281   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224310   57440 pod_ready.go:82] duration metric: took 353.622663ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.224322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224331   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.624518   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624552   57440 pod_ready.go:82] duration metric: took 400.212041ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.624567   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624577   57440 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:30.030291   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030327   57440 pod_ready.go:82] duration metric: took 405.73495ms for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:30.030341   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030352   57440 pod_ready.go:39] duration metric: took 1.211214389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
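The block above records minikube listing the kube-system pods, reading node cpu and ephemeral-storage capacity, and then polling each system-critical pod for its Ready condition before giving up on the not-yet-Ready node. A minimal client-go sketch of the same kind of check follows; it is an illustration written for this report, not minikube's own node_conditions.go or pod_ready.go, and the kubeconfig path, namespace, and timeout are assumptions.

// readiness_sketch.go: illustrative only; mirrors the node-capacity and pod Ready checks
// visible in the log above. Not minikube source; kubeconfig path and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Node capacity, analogous to the node_conditions.go lines above.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}

	// Poll kube-system pods for the Ready condition, analogous to the pod_ready.go lines above.
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil {
			ready := 0
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					ready++
				}
			}
			fmt.Printf("%d/%d kube-system pods Ready\n", ready, len(pods.Items))
			if len(pods.Items) > 0 && ready == len(pods.Items) {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-system pods to become Ready")
}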
	I0816 13:44:30.030372   57440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:30.045247   57440 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:30.045279   57440 kubeadm.go:597] duration metric: took 9.441179951s to restartPrimaryControlPlane
	I0816 13:44:30.045291   57440 kubeadm.go:394] duration metric: took 9.489057744s to StartCluster
	I0816 13:44:30.045312   57440 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.045410   57440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:30.047053   57440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.047310   57440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:30.047415   57440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:30.047486   57440 addons.go:69] Setting storage-provisioner=true in profile "no-preload-311070"
	I0816 13:44:30.047521   57440 addons.go:234] Setting addon storage-provisioner=true in "no-preload-311070"
	W0816 13:44:30.047534   57440 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:30.047569   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048048   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048079   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048302   57440 addons.go:69] Setting default-storageclass=true in profile "no-preload-311070"
	I0816 13:44:30.048339   57440 addons.go:69] Setting metrics-server=true in profile "no-preload-311070"
	I0816 13:44:30.048377   57440 addons.go:234] Setting addon metrics-server=true in "no-preload-311070"
	W0816 13:44:30.048387   57440 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:30.048424   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048343   57440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-311070"
	I0816 13:44:30.048812   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048834   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048933   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048957   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.049282   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:30.050905   57440 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:30.052478   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:30.069405   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0816 13:44:30.069463   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0816 13:44:30.069735   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0816 13:44:30.069949   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070072   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070145   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070488   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070506   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070586   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070598   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070618   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070627   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070977   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071006   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071031   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071212   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071639   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.071621   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.074680   57440 addons.go:234] Setting addon default-storageclass=true in "no-preload-311070"
	W0816 13:44:30.074699   57440 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:30.074730   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.075073   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.075100   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.088961   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0816 13:44:30.089421   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.089952   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.089971   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.090128   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0816 13:44:30.090429   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.090491   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.090744   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.090933   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.090950   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.091263   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.091463   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.093258   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.093571   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:30.094479   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0816 13:44:30.094965   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.095482   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.095502   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.095857   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.096322   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.096361   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.128555   57440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.128676   57440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:26.244353   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245158   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.245062   59082 retry.go:31] will retry after 680.176025ms: waiting for machine to come up
	I0816 13:44:26.926654   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927139   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.927106   59082 retry.go:31] will retry after 720.530442ms: waiting for machine to come up
	I0816 13:44:27.648858   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649342   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649367   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:27.649289   59082 retry.go:31] will retry after 930.752133ms: waiting for machine to come up
	I0816 13:44:28.581283   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581684   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581709   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:28.581635   59082 retry.go:31] will retry after 972.791503ms: waiting for machine to come up
	I0816 13:44:29.556168   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556583   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:29.556525   59082 retry.go:31] will retry after 1.18129541s: waiting for machine to come up
	I0816 13:44:30.739498   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739951   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739978   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:30.739883   59082 retry.go:31] will retry after 2.27951459s: waiting for machine to come up
	I0816 13:44:30.133959   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 13:44:30.134516   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.135080   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.135105   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.135463   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.135598   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.137494   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.137805   57440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.137824   57440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:30.137839   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.141006   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141509   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.141544   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141772   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.141952   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.142150   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.142305   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.164598   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:30.164627   57440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:30.164653   57440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.164654   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.164662   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:30.164687   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.168935   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169259   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169588   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169615   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169806   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.169828   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169859   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169953   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170096   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170103   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.170243   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170241   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.170389   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170505   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.285806   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:30.312267   57440 node_ready.go:35] waiting up to 6m0s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:30.406371   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.409491   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:30.409515   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:30.440485   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:30.440508   57440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:30.480735   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.484549   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:30.484573   57440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:30.541485   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:32.535406   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:33.204746   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.723973086s)
	I0816 13:44:33.204802   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204817   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.204843   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.798437569s)
	I0816 13:44:33.204877   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204889   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205092   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205116   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205126   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205134   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205357   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205359   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205379   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205387   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205395   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205408   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205445   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205454   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205593   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205605   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.214075   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.214095   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.214307   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.214320   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259136   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.717608276s)
	I0816 13:44:33.259188   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259212   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259468   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.259485   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259495   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259503   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259988   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.260004   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.260016   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.260026   57440 addons.go:475] Verifying addon metrics-server=true in "no-preload-311070"
	I0816 13:44:33.262190   57440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
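The three ls/openssl/ln sequences above register each CA bundle with OpenSSL: `openssl x509 -hash` prints the certificate's subject-name hash, and the bundle is then symlinked as `<hash>.0` under /etc/ssl/certs so TLS clients can find it. A minimal Go sketch of that step, shelling out to openssl exactly as the log does; the helper name linkCertByHash is made up for illustration and is not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash mirrors the two commands in the log above: it asks openssl
    // for the certificate's subject hash and then symlinks the PEM file as
    // <certsDir>/<hash>.0, the layout OpenSSL expects for trusted CAs.
    func linkCertByHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // equivalent to ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/11149.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }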
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
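The `-checkend 86400` runs above ask openssl whether each existing control-plane certificate expires within the next 24 hours before those certificates are reused for the restart. The same check can be expressed natively; this is a minimal Go sketch under that assumption, not minikube's implementation (the helper name expiresWithin is hypothetical):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within the given window, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }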
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
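The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the endpoint is absent (here the files do not exist at all), so the following kubeadm init phases regenerate them. A rough Go sketch of that per-file decision, assuming a hypothetical helper rather than minikube's own function:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfMissingEndpoint mirrors the grep/rm pairs above: if the kubeconfig
    // does not reference the expected control-plane endpoint (or does not exist),
    // delete it so `kubeadm init phase kubeconfig` recreates it.
    func removeIfMissingEndpoint(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // config already points at the right endpoint; keep it
    	}
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return nil
    }

    func main() {
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		if err := removeIfMissingEndpoint("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }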
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:33.022378   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023028   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:33.022950   59082 retry.go:31] will retry after 1.906001247s: waiting for machine to come up
	I0816 13:44:34.930169   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930674   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930702   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:34.930612   59082 retry.go:31] will retry after 2.809719622s: waiting for machine to come up
	I0816 13:44:33.263780   57440 addons.go:510] duration metric: took 3.216351591s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 13:44:34.816280   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:36.817474   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
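The repeated pgrep lines above are a poll loop: after the kubeadm init phases, minikube waits for the kube-apiserver process to appear, re-running the same pgrep roughly every half second. A minimal Go sketch of such a wait loop, shown only as an illustration of the cadence visible in the log (the function name and the 2-minute timeout are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls the same pgrep command seen in the log every 500ms
    // until a kube-apiserver PID shows up or the timeout elapses.
    func waitForAPIServer(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 {
    			return string(out), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
    	pid, err := waitForAPIServer(2 * time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }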
	I0816 13:44:37.742122   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742506   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742545   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:37.742464   59082 retry.go:31] will retry after 4.139761236s: waiting for machine to come up
	I0816 13:44:37.815407   57440 node_ready.go:49] node "no-preload-311070" has status "Ready":"True"
	I0816 13:44:37.815428   57440 node_ready.go:38] duration metric: took 7.503128864s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:37.815437   57440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:37.820318   57440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825460   57440 pod_ready.go:93] pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.825478   57440 pod_ready.go:82] duration metric: took 5.136508ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825486   57440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829609   57440 pod_ready.go:93] pod "etcd-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.829628   57440 pod_ready.go:82] duration metric: took 4.133294ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829635   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:39.835973   57440 pod_ready.go:103] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:40.335270   57440 pod_ready.go:93] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:40.335289   57440 pod_ready.go:82] duration metric: took 2.505647853s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:40.335298   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:43.233555   57240 start.go:364] duration metric: took 55.654362151s to acquireMachinesLock for "embed-certs-302520"
	I0816 13:44:43.233638   57240 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:43.233649   57240 fix.go:54] fixHost starting: 
	I0816 13:44:43.234047   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:43.234078   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:43.253929   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0816 13:44:43.254405   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:43.254878   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:44:43.254900   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:43.255235   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:43.255400   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:44:43.255578   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:44:43.257434   57240 fix.go:112] recreateIfNeeded on embed-certs-302520: state=Stopped err=<nil>
	I0816 13:44:43.257472   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	W0816 13:44:43.257637   57240 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:43.259743   57240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-302520" ...
	I0816 13:44:41.885729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886143   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Found IP for machine: 192.168.50.186
	I0816 13:44:41.886162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserving static IP address...
	I0816 13:44:41.886178   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has current primary IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886540   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.886570   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | skip adding static IP to network mk-default-k8s-diff-port-893736 - found existing host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"}
	I0816 13:44:41.886585   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserved static IP address: 192.168.50.186
	I0816 13:44:41.886600   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for SSH to be available...
	I0816 13:44:41.886617   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Getting to WaitForSSH function...
	I0816 13:44:41.888671   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889003   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.889047   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889118   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH client type: external
	I0816 13:44:41.889142   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa (-rw-------)
	I0816 13:44:41.889181   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:41.889201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | About to run SSH command:
	I0816 13:44:41.889215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | exit 0
	I0816 13:44:42.017010   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:42.017374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetConfigRaw
	I0816 13:44:42.017979   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.020580   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.020958   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.020992   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.021174   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:44:42.021342   58430 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:42.021356   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.021521   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.023732   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024033   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.024057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.024354   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024667   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.024811   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.024994   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.025005   58430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:42.137459   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:42.137495   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137722   58430 buildroot.go:166] provisioning hostname "default-k8s-diff-port-893736"
	I0816 13:44:42.137745   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137925   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.140599   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.140987   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.141017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.141148   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.141309   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141430   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141536   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.141677   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.141843   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.141855   58430 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893736 && echo "default-k8s-diff-port-893736" | sudo tee /etc/hostname
	I0816 13:44:42.267643   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893736
	
	I0816 13:44:42.267670   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.270489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.270834   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.270867   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.271089   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.271266   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271405   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271527   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.271675   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.271829   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.271847   58430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:42.398010   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:42.398057   58430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:42.398122   58430 buildroot.go:174] setting up certificates
	I0816 13:44:42.398139   58430 provision.go:84] configureAuth start
	I0816 13:44:42.398157   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.398484   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.401217   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401566   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.401587   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.404082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404380   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.404425   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404541   58430 provision.go:143] copyHostCerts
	I0816 13:44:42.404596   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:42.404606   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:42.404666   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:42.404758   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:42.404767   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:42.404788   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:42.404850   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:42.404857   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:42.404873   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:42.404965   58430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893736 san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]
	I0816 13:44:42.551867   58430 provision.go:177] copyRemoteCerts
	I0816 13:44:42.551928   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:42.551954   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.554945   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555276   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.555316   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.555699   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.555838   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.555964   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:42.643591   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:42.667108   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 13:44:42.690852   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:42.714001   58430 provision.go:87] duration metric: took 315.84846ms to configureAuth
	I0816 13:44:42.714030   58430 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:42.714189   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:42.714263   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.716726   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.717110   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717282   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.717486   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717621   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717740   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.717883   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.718038   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.718055   58430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:42.988769   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:42.988798   58430 machine.go:96] duration metric: took 967.444538ms to provisionDockerMachine
	I0816 13:44:42.988814   58430 start.go:293] postStartSetup for "default-k8s-diff-port-893736" (driver="kvm2")
	I0816 13:44:42.988833   58430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:42.988864   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.989226   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:42.989261   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.991868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.992184   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992364   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.992537   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.992689   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.992838   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.079199   58430 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:43.083277   58430 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:43.083296   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:43.083357   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:43.083459   58430 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:43.083576   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:43.092684   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:43.115693   58430 start.go:296] duration metric: took 126.86489ms for postStartSetup
	I0816 13:44:43.115735   58430 fix.go:56] duration metric: took 19.425768942s for fixHost
	I0816 13:44:43.115761   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.118597   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.118915   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.118947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.119100   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.119306   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119442   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.119683   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:43.119840   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:43.119850   58430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:43.233362   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815883.193133132
	
	I0816 13:44:43.233394   58430 fix.go:216] guest clock: 1723815883.193133132
	I0816 13:44:43.233406   58430 fix.go:229] Guest: 2024-08-16 13:44:43.193133132 +0000 UTC Remote: 2024-08-16 13:44:43.115740856 +0000 UTC m=+147.151935383 (delta=77.392276ms)
	I0816 13:44:43.233479   58430 fix.go:200] guest clock delta is within tolerance: 77.392276ms
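The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compute the delta against the host-side timestamp (77.392276ms here), and accept it because it falls inside the allowed drift. A sketch of that comparison in Go; the 2-second tolerance is an assumed illustration value, not minikube's actual constant:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK returns the absolute guest/host skew and whether it is
    // within the given tolerance, mirroring the delta check in the log.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	guest := time.Unix(0, 1723815883193133132)          // from `date +%s.%N` in the log
    	host := guest.Add(-77392276 * time.Nanosecond)      // host-side reference time
    	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }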
	I0816 13:44:43.233486   58430 start.go:83] releasing machines lock for "default-k8s-diff-port-893736", held for 19.543554553s
	I0816 13:44:43.233517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.233783   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:43.236492   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.236875   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.236901   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.237136   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237703   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237943   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.238074   58430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:43.238153   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.238182   58430 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:43.238215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.240639   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241029   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241193   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241360   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.241573   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241581   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.241601   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241733   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241732   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.241895   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.242052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.242178   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.352903   58430 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:43.359071   58430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:43.509233   58430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:43.516592   58430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:43.516666   58430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:43.534069   58430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:43.534096   58430 start.go:495] detecting cgroup driver to use...
	I0816 13:44:43.534167   58430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:43.553305   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:43.569958   58430 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:43.570007   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:43.590642   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:43.606411   58430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:43.733331   58430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:43.882032   58430 docker.go:233] disabling docker service ...
	I0816 13:44:43.882110   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:43.896780   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:43.909702   58430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:44.044071   58430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:44.170798   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:44.184421   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:44.203201   58430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:44.203269   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.213647   58430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:44.213708   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.224261   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.235295   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.247670   58430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:44.264065   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.276212   58430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.296049   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
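The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, run conmon in the pod cgroup, and add a default_sysctls entry opening unprivileged ports from 0. A rough Go sketch of one of those substitutions (the pause image), using the same anchored pattern the log shows; the helper is illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setPauseImage rewrites the pause_image line in a CRI-O drop-in config,
    // equivalent to the `sed -i 's|^.*pause_image = .*$|...|'` call in the log.
    func setPauseImage(confPath, image string) error {
    	data, err := os.ReadFile(confPath)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
    	return os.WriteFile(confPath, updated, 0o644)
    }

    func main() {
    	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }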
	I0816 13:44:44.307920   58430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:44.319689   58430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:44.319746   58430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:44.335735   58430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
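Before restarting CRI-O, the log verifies bridge netfilter: the sysctl probe fails with status 255 because br_netfilter is not loaded, so the module is loaded with modprobe and IPv4 forwarding is enabled by writing 1 to /proc/sys/net/ipv4/ip_forward. A minimal Go sketch of that sequence, running the same commands the log runs (the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the steps above: if the bridge-nf-call-iptables
    // sysctl file is missing, load br_netfilter, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	// equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }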
	I0816 13:44:44.352364   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:44.476754   58430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:44.618847   58430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:44.618914   58430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:44.623946   58430 start.go:563] Will wait 60s for crictl version
	I0816 13:44:44.624004   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:44:44.627796   58430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:44.666274   58430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:44.666350   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.694476   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.723937   58430 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:43.261237   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Start
	I0816 13:44:43.261399   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring networks are active...
	I0816 13:44:43.262183   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network default is active
	I0816 13:44:43.262591   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network mk-embed-certs-302520 is active
	I0816 13:44:43.263044   57240 main.go:141] libmachine: (embed-certs-302520) Getting domain xml...
	I0816 13:44:43.263849   57240 main.go:141] libmachine: (embed-certs-302520) Creating domain...
	I0816 13:44:44.565632   57240 main.go:141] libmachine: (embed-certs-302520) Waiting to get IP...
	I0816 13:44:44.566705   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.567120   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.567211   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.567113   59274 retry.go:31] will retry after 259.265867ms: waiting for machine to come up
	I0816 13:44:44.827603   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.828117   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.828152   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.828043   59274 retry.go:31] will retry after 271.270487ms: waiting for machine to come up
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.725112   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:44.728077   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728446   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:44.728469   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728728   58430 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:44.733365   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:44.746196   58430 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:44.746325   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:44.746385   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:44.787402   58430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:44.787481   58430 ssh_runner.go:195] Run: which lz4
	I0816 13:44:44.791755   58430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:44.797290   58430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:44.797320   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:44:42.342663   57440 pod_ready.go:93] pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.342685   57440 pod_ready.go:82] duration metric: took 2.007381193s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.342694   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346807   57440 pod_ready.go:93] pod "kube-proxy-b8d5b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.346824   57440 pod_ready.go:82] duration metric: took 4.124529ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346832   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351010   57440 pod_ready.go:93] pod "kube-scheduler-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.351025   57440 pod_ready.go:82] duration metric: took 4.186812ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351032   57440 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:44.358663   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:46.359708   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:45.100554   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.101150   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.101265   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.101207   59274 retry.go:31] will retry after 309.469795ms: waiting for machine to come up
	I0816 13:44:45.412518   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.413009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.413040   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.412975   59274 retry.go:31] will retry after 502.564219ms: waiting for machine to come up
	I0816 13:44:45.917731   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.918284   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.918316   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.918235   59274 retry.go:31] will retry after 723.442166ms: waiting for machine to come up
	I0816 13:44:46.642971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:46.643467   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:46.643498   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:46.643400   59274 retry.go:31] will retry after 600.365383ms: waiting for machine to come up
	I0816 13:44:47.245233   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:47.245756   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:47.245785   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:47.245710   59274 retry.go:31] will retry after 1.06438693s: waiting for machine to come up
	I0816 13:44:48.312043   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:48.312842   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:48.312886   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:48.312840   59274 retry.go:31] will retry after 903.877948ms: waiting for machine to come up
	I0816 13:44:49.218419   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:49.218805   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:49.218835   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:49.218758   59274 retry.go:31] will retry after 1.73892963s: waiting for machine to come up
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.230345   58430 crio.go:462] duration metric: took 1.438624377s to copy over tarball
	I0816 13:44:46.230429   58430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:48.358060   58430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127589486s)
	I0816 13:44:48.358131   58430 crio.go:469] duration metric: took 2.127754698s to extract the tarball
	I0816 13:44:48.358145   58430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:48.398054   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:48.449391   58430 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:44:48.449416   58430 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:44:48.449425   58430 kubeadm.go:934] updating node { 192.168.50.186 8444 v1.31.0 crio true true} ...
	I0816 13:44:48.449576   58430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-893736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:48.449662   58430 ssh_runner.go:195] Run: crio config
	I0816 13:44:48.499389   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:48.499413   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:48.499424   58430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:48.499452   58430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893736 NodeName:default-k8s-diff-port-893736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:48.499576   58430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-893736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:48.499653   58430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:48.509639   58430 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:48.509706   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:48.519099   58430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 13:44:48.535866   58430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:48.552977   58430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 13:44:48.571198   58430 ssh_runner.go:195] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:48.575881   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:48.587850   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:48.703848   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:48.721449   58430 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736 for IP: 192.168.50.186
	I0816 13:44:48.721476   58430 certs.go:194] generating shared ca certs ...
	I0816 13:44:48.721496   58430 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:48.721677   58430 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:48.721731   58430 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:48.721745   58430 certs.go:256] generating profile certs ...
	I0816 13:44:48.721843   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/client.key
	I0816 13:44:48.721926   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key.64c9b41b
	I0816 13:44:48.721980   58430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key
	I0816 13:44:48.722107   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:48.722138   58430 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:48.722149   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:48.722182   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:48.722204   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:48.722225   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:48.722258   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:48.722818   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:48.779462   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:48.814653   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:48.887435   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:48.913644   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 13:44:48.937536   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:48.960729   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:48.984375   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:44:49.007997   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:49.031631   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:49.054333   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:49.076566   58430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:49.092986   58430 ssh_runner.go:195] Run: openssl version
	I0816 13:44:49.098555   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:49.109454   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114868   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114934   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.120811   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:49.131829   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:49.142825   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147276   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147322   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.152678   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:49.163622   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:49.174426   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179353   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179406   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.185129   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:49.196668   58430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:49.201447   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:49.207718   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:49.213869   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:49.220325   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:49.226220   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:49.231971   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:44:49.238080   58430 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:49.238178   58430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:49.238231   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.276621   58430 cri.go:89] found id: ""
	I0816 13:44:49.276719   58430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:49.287765   58430 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:49.287785   58430 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:49.287829   58430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:49.298038   58430 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:49.299171   58430 kubeconfig.go:125] found "default-k8s-diff-port-893736" server: "https://192.168.50.186:8444"
	I0816 13:44:49.301521   58430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:49.311800   58430 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.186
	I0816 13:44:49.311833   58430 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:49.311845   58430 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:49.311899   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.363716   58430 cri.go:89] found id: ""
	I0816 13:44:49.363784   58430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:49.381053   58430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:49.391306   58430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:49.391322   58430 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:49.391370   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 13:44:49.400770   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:49.400829   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:49.410252   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 13:44:49.419405   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:49.419481   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:49.429330   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.438521   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:49.438587   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.448144   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 13:44:49.456744   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:49.456805   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:49.466062   58430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:49.476159   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:49.597639   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.673182   58430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075495766s)
	I0816 13:44:50.673218   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.887802   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.958384   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:48.858145   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:51.358083   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:50.959807   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:50.960217   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:50.960236   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:50.960188   59274 retry.go:31] will retry after 2.32558417s: waiting for machine to come up
	I0816 13:44:53.287947   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:53.288441   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:53.288470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:53.288388   59274 retry.go:31] will retry after 1.85414625s: waiting for machine to come up
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.054015   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:51.054101   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.554846   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.055178   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.082087   58430 api_server.go:72] duration metric: took 1.028080423s to wait for apiserver process to appear ...
	I0816 13:44:52.082114   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:52.082133   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:52.082624   58430 api_server.go:269] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": dial tcp 192.168.50.186:8444: connect: connection refused
	I0816 13:44:52.582261   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.336530   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.336565   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.336580   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.374699   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.374733   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.583112   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.588756   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:55.588782   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.082182   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.088062   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.088108   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.582273   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.587049   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.587087   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:57.082664   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:57.092562   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:44:57.100740   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:57.100767   58430 api_server.go:131] duration metric: took 5.018647278s to wait for apiserver health ...
	I0816 13:44:57.100777   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:57.100784   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:57.102775   58430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:53.358390   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:55.358437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:57.104079   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:57.115212   58430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:57.137445   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:57.150376   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:57.150412   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:57.150422   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:57.150435   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:57.150448   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:57.150454   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:57.150458   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:57.150463   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:57.150472   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:44:57.150481   58430 system_pods.go:74] duration metric: took 13.019757ms to wait for pod list to return data ...
	I0816 13:44:57.150489   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:57.153699   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:57.153721   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:57.153731   58430 node_conditions.go:105] duration metric: took 3.237407ms to run NodePressure ...
	I0816 13:44:57.153752   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:57.439130   58430 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446848   58430 kubeadm.go:739] kubelet initialised
	I0816 13:44:57.446876   58430 kubeadm.go:740] duration metric: took 7.718113ms waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446885   58430 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:57.452263   58430 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.459002   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459024   58430 pod_ready.go:82] duration metric: took 6.735487ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.459033   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459039   58430 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.463723   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463742   58430 pod_ready.go:82] duration metric: took 4.695932ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.463751   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463756   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.468593   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468619   58430 pod_ready.go:82] duration metric: took 4.856498ms for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.468632   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468643   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.541251   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541278   58430 pod_ready.go:82] duration metric: took 72.626413ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.541290   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541296   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.940580   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940616   58430 pod_ready.go:82] duration metric: took 399.312571ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.940627   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940635   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.340647   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340671   58430 pod_ready.go:82] duration metric: took 400.026004ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.340683   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340694   58430 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.750549   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750573   58430 pod_ready.go:82] duration metric: took 409.872187ms for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.750588   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750598   58430 pod_ready.go:39] duration metric: took 1.303702313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:58.750626   58430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:58.766462   58430 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:58.766482   58430 kubeadm.go:597] duration metric: took 9.478690644s to restartPrimaryControlPlane
	I0816 13:44:58.766491   58430 kubeadm.go:394] duration metric: took 9.528416258s to StartCluster
	I0816 13:44:58.766509   58430 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.766572   58430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:58.770737   58430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.771036   58430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:58.771138   58430 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:58.771218   58430 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771232   58430 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771245   58430 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771281   58430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893736"
	I0816 13:44:58.771252   58430 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771337   58430 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:58.771371   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771285   58430 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771447   58430 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:58.771485   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771231   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:58.771653   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771682   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771750   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771781   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771839   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771886   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.772665   58430 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:58.773992   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:58.788717   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0816 13:44:58.789233   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.789833   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.789859   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.790269   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.790882   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.790913   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.791553   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0816 13:44:58.791556   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0816 13:44:58.791945   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.791979   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.792413   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792440   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.792813   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.792963   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792986   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.793013   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.793374   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.793940   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.793986   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.796723   58430 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.796740   58430 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:58.796763   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.797138   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.797184   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.806753   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0816 13:44:58.807162   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.807605   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.807624   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.807984   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.808229   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.809833   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.811642   58430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:58.812716   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0816 13:44:58.812888   58430 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:58.812902   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:58.812937   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.813184   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.813668   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.813695   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.813725   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0816 13:44:58.814101   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.814207   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.814696   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.814715   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.814948   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.814961   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.815304   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.815518   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.816936   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817482   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.817529   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.817543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817871   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.818057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.818219   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.818397   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.819251   58430 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:55.143862   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:55.144403   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:55.144433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:55.144354   59274 retry.go:31] will retry after 3.573850343s: waiting for machine to come up
	I0816 13:44:58.720104   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:58.720571   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:58.720606   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:58.720510   59274 retry.go:31] will retry after 4.52867767s: waiting for machine to come up
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.820720   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:58.820733   58430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:58.820747   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.823868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824290   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.824305   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.824629   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.824764   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.824860   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.830530   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0816 13:44:58.830894   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.831274   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.831294   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.831583   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.831729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.833321   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.833512   58430 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:58.833526   58430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:58.833543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.836244   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836626   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.836649   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836762   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.836947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.837098   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.837234   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.973561   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:58.995763   58430 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:44:59.118558   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:59.126100   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:59.126125   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:59.154048   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:59.162623   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:59.162649   58430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:59.213614   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.213635   58430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:59.233653   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.485000   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485030   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485329   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.485384   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485397   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485406   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485414   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485736   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485777   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485741   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.491692   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.491711   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.491938   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.491957   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.273964   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.04027784s)
	I0816 13:45:00.274018   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274036   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274032   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119945545s)
	I0816 13:45:00.274065   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274080   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274373   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274388   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274398   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274406   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274441   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274481   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274499   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274513   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274620   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274633   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274643   58430 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-893736"
	I0816 13:45:00.274749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274842   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274851   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.276747   58430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 13:45:00.278150   58430 addons.go:510] duration metric: took 1.506994649s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 13:44:57.858846   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:00.357028   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:03.253913   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254379   57240 main.go:141] libmachine: (embed-certs-302520) Found IP for machine: 192.168.39.125
	I0816 13:45:03.254401   57240 main.go:141] libmachine: (embed-certs-302520) Reserving static IP address...
	I0816 13:45:03.254418   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has current primary IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254776   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.254804   57240 main.go:141] libmachine: (embed-certs-302520) Reserved static IP address: 192.168.39.125
	I0816 13:45:03.254822   57240 main.go:141] libmachine: (embed-certs-302520) DBG | skip adding static IP to network mk-embed-certs-302520 - found existing host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"}
	I0816 13:45:03.254840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Getting to WaitForSSH function...
	I0816 13:45:03.254848   57240 main.go:141] libmachine: (embed-certs-302520) Waiting for SSH to be available...
	I0816 13:45:03.256951   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257302   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.257327   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH client type: external
	I0816 13:45:03.257483   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa (-rw-------)
	I0816 13:45:03.257519   57240 main.go:141] libmachine: (embed-certs-302520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:45:03.257528   57240 main.go:141] libmachine: (embed-certs-302520) DBG | About to run SSH command:
	I0816 13:45:03.257537   57240 main.go:141] libmachine: (embed-certs-302520) DBG | exit 0
	I0816 13:45:03.389262   57240 main.go:141] libmachine: (embed-certs-302520) DBG | SSH cmd err, output: <nil>: 
	I0816 13:45:03.389630   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetConfigRaw
	I0816 13:45:03.390305   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.392462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.392767   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.392795   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.393012   57240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:45:03.393212   57240 machine.go:93] provisionDockerMachine start ...
	I0816 13:45:03.393230   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:03.393453   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.395589   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.395949   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.395971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.396086   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.396258   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396589   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.396785   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.397004   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.397042   57240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:45:03.513624   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:45:03.513655   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.513954   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:45:03.513976   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.514199   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.517138   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517499   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.517520   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517672   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.517867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518007   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518168   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.518364   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.518583   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.518599   57240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-302520 && echo "embed-certs-302520" | sudo tee /etc/hostname
	I0816 13:45:03.647799   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-302520
	
	I0816 13:45:03.647840   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.650491   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.650846   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.650880   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.651103   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.651301   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651469   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651614   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.651778   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.651935   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.651951   57240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-302520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-302520/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-302520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:45:03.778350   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:45:03.778381   57240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:45:03.778411   57240 buildroot.go:174] setting up certificates
	I0816 13:45:03.778423   57240 provision.go:84] configureAuth start
	I0816 13:45:03.778435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.778689   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.781319   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781673   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.781695   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781829   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.783724   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784035   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.784064   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784180   57240 provision.go:143] copyHostCerts
	I0816 13:45:03.784243   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:45:03.784262   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:45:03.784335   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:45:03.784462   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:45:03.784474   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:45:03.784503   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:45:03.784568   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:45:03.784578   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:45:03.784600   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:45:03.784647   57240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.embed-certs-302520 san=[127.0.0.1 192.168.39.125 embed-certs-302520 localhost minikube]
	I0816 13:45:03.901261   57240 provision.go:177] copyRemoteCerts
	I0816 13:45:03.901314   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:45:03.901339   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.904505   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.904893   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.904933   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.905118   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.905329   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.905499   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.905650   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:03.996083   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:45:04.024594   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:45:04.054080   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:45:04.079810   57240 provision.go:87] duration metric: took 301.374056ms to configureAuth
	I0816 13:45:04.079865   57240 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:45:04.080048   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:45:04.080116   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.082649   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083037   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.083090   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083239   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.083430   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083775   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.083951   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.084149   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.084171   57240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:45:04.404121   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:45:04.404150   57240 machine.go:96] duration metric: took 1.010924979s to provisionDockerMachine
	I0816 13:45:04.404163   57240 start.go:293] postStartSetup for "embed-certs-302520" (driver="kvm2")
	I0816 13:45:04.404182   57240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:45:04.404202   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.404542   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:45:04.404574   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.407763   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408118   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.408145   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408311   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.408508   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.408685   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.408864   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.496519   57240 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:45:04.501262   57240 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:45:04.501282   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:45:04.501352   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:45:04.501440   57240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:45:04.501554   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:45:04.511338   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:04.535372   57240 start.go:296] duration metric: took 131.188411ms for postStartSetup
	I0816 13:45:04.535411   57240 fix.go:56] duration metric: took 21.301761751s for fixHost
	I0816 13:45:04.535435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.538286   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538651   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.538676   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538868   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.539069   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539208   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539344   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.539504   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.539702   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.539715   57240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:45:04.653529   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815904.606422212
	
	I0816 13:45:04.653556   57240 fix.go:216] guest clock: 1723815904.606422212
	I0816 13:45:04.653566   57240 fix.go:229] Guest: 2024-08-16 13:45:04.606422212 +0000 UTC Remote: 2024-08-16 13:45:04.535416156 +0000 UTC m=+359.547804920 (delta=71.006056ms)
	I0816 13:45:04.653598   57240 fix.go:200] guest clock delta is within tolerance: 71.006056ms
	I0816 13:45:04.653605   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 21.419990329s
	I0816 13:45:04.653631   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.653922   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:04.656682   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.657034   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657211   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657800   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657981   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.658069   57240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:45:04.658114   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.658172   57240 ssh_runner.go:195] Run: cat /version.json
	I0816 13:45:04.658193   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.660629   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.660942   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661051   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661076   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661315   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661474   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.661598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661646   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.661841   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.661904   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.662054   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.662199   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.767691   57240 ssh_runner.go:195] Run: systemctl --version
	I0816 13:45:04.773984   57240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:45:04.925431   57240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:45:04.931848   57240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:45:04.931931   57240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:45:04.951355   57240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:45:04.951381   57240 start.go:495] detecting cgroup driver to use...
	I0816 13:45:04.951442   57240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:45:04.972903   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:45:04.987531   57240 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:45:04.987600   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:45:05.001880   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:45:05.018403   57240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.999833   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:03.500652   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:05.143662   57240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:45:05.297447   57240 docker.go:233] disabling docker service ...
	I0816 13:45:05.297527   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:45:05.313382   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:45:05.327116   57240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:45:05.486443   57240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:45:05.620465   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:45:05.634813   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:45:05.653822   57240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:45:05.653887   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.664976   57240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:45:05.665045   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.676414   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.688631   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.700400   57240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:45:05.712822   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.724573   57240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.742934   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.755669   57240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:45:05.766837   57240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:45:05.766890   57240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:45:05.782296   57240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:45:05.793695   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.919559   57240 ssh_runner.go:195] Run: sudo systemctl restart crio
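Note: condensed, the CRI-O preparation above is a handful of edits to /etc/crio/crio.conf.d/02-crio.conf plus a netfilter/forwarding check. The same sequence as a script (run as root on the guest; the commands mirror the log lines above):

# Point crictl at the CRI-O socket.
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
# Pause image and cgroup driver used by this run.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
# Let pods bind low ports without extra privileges.
grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
# Bridge netfilter and IPv4 forwarding, then restart CRI-O.
modprobe br_netfilter
echo 1 > /proc/sys/net/ipv4/ip_forward
systemctl daemon-reload && systemctl restart crio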
	I0816 13:45:06.057480   57240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:45:06.057543   57240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:45:06.062348   57240 start.go:563] Will wait 60s for crictl version
	I0816 13:45:06.062414   57240 ssh_runner.go:195] Run: which crictl
	I0816 13:45:06.066456   57240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:45:06.104075   57240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:45:06.104156   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.132406   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.161878   57240 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:45:02.357119   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:04.361365   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.859546   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.163233   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:06.165924   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166310   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:06.166333   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166529   57240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:45:06.170722   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:06.183152   57240 kubeadm.go:883] updating cluster {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:45:06.183256   57240 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:45:06.183306   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:06.223405   57240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:45:06.223489   57240 ssh_runner.go:195] Run: which lz4
	I0816 13:45:06.227851   57240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:45:06.232132   57240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:45:06.232156   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:45:07.642616   57240 crio.go:462] duration metric: took 1.414789512s to copy over tarball
	I0816 13:45:07.642698   57240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:45:09.794329   57240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.151601472s)
	I0816 13:45:09.794359   57240 crio.go:469] duration metric: took 2.151717024s to extract the tarball
	I0816 13:45:09.794369   57240 ssh_runner.go:146] rm: /preloaded.tar.lz4
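Note: the preload step copies a ~389 MB lz4 tarball of container images to the guest and unpacks it under /var so CRI-O finds the images locally instead of pulling them. A sketch of the same copy-and-extract, staging through /tmp instead of the log's /preloaded.tar.lz4 so the copy itself needs no root:

PRELOAD=/home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
KEY=/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa
GUEST=docker@192.168.39.125
# Copy the tarball, extract it into /var (the image store lives under /var/lib/containers), clean up.
scp -i "$KEY" "$PRELOAD" "$GUEST":/tmp/preloaded.tar.lz4
ssh -i "$KEY" "$GUEST" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'
# The images should now be visible to CRI-O.
ssh -i "$KEY" "$GUEST" 'sudo crictl images'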
	I0816 13:45:09.833609   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:09.878781   57240 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:45:09.878806   57240 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:45:09.878815   57240 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0816 13:45:09.878944   57240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
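Note: the kubelet flags above reach the guest as systemd unit content (the 10-kubeadm.conf drop-in and kubelet.service are scp'd a few lines below). A sketch of installing the override by hand, with the ExecStart taken from this run; the exact split between kubelet.service and the drop-in is not shown in the log, so treat this single drop-in as one possible layout:

# Run as root on the guest.
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
EOF
systemctl daemon-reload
systemctl restart kubelet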
	I0816 13:45:09.879032   57240 ssh_runner.go:195] Run: crio config
	I0816 13:45:09.924876   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:09.924900   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:09.924927   57240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:45:09.924958   57240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-302520 NodeName:embed-certs-302520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:45:09.925150   57240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-302520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:45:09.925226   57240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:45:09.935204   57240 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:45:09.935280   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:45:09.945366   57240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 13:45:09.961881   57240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:45:09.978495   57240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
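Note: the freshly rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and only copied over the live file when it differs (the diff and the cp appear further down in this run). A minimal sketch of that stage-and-swap; the rendered-file name here is illustrative:

sudo mkdir -p /var/tmp/minikube
sudo cp kubeadm.rendered.yaml /var/tmp/minikube/kubeadm.yaml.new   # illustrative source name
# Replace the live config only when it actually changed (diff exits non-zero on any difference).
if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
fi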
	I0816 13:45:09.995664   57240 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0816 13:45:10.000132   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:10.013039   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.000553   58430 node_ready.go:49] node "default-k8s-diff-port-893736" has status "Ready":"True"
	I0816 13:45:06.000579   58430 node_ready.go:38] duration metric: took 7.004778161s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:45:06.000590   58430 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:06.006987   58430 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012552   58430 pod_ready.go:93] pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.012577   58430 pod_ready.go:82] duration metric: took 5.565882ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012588   58430 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519889   58430 pod_ready.go:93] pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.519919   58430 pod_ready.go:82] duration metric: took 507.322547ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519932   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:08.527411   58430 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:09.527923   58430 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.527950   58430 pod_ready.go:82] duration metric: took 3.008009418s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.527963   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534422   58430 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.534460   58430 pod_ready.go:82] duration metric: took 6.488169ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534476   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538660   58430 pod_ready.go:93] pod "kube-proxy-btq6r" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.538688   58430 pod_ready.go:82] duration metric: took 4.202597ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538700   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600350   58430 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.600377   58430 pod_ready.go:82] duration metric: took 61.666987ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600391   58430 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.361968   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:11.859112   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:10.143519   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:45:10.160358   57240 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520 for IP: 192.168.39.125
	I0816 13:45:10.160381   57240 certs.go:194] generating shared ca certs ...
	I0816 13:45:10.160400   57240 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:45:10.160591   57240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:45:10.160646   57240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:45:10.160656   57240 certs.go:256] generating profile certs ...
	I0816 13:45:10.160767   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/client.key
	I0816 13:45:10.160845   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key.f0c5f9ff
	I0816 13:45:10.160893   57240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key
	I0816 13:45:10.161075   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:45:10.161133   57240 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:45:10.161148   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:45:10.161182   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:45:10.161213   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:45:10.161243   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:45:10.161298   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:10.161944   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:45:10.202268   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:45:10.242684   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:45:10.287223   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:45:10.316762   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 13:45:10.343352   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:45:10.371042   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:45:10.394922   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:45:10.419358   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:45:10.442301   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:45:10.465266   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:45:10.487647   57240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:45:10.504713   57240 ssh_runner.go:195] Run: openssl version
	I0816 13:45:10.510493   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:45:10.521818   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526637   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526681   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.532660   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:45:10.543403   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:45:10.554344   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559089   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559149   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.564982   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:45:10.576074   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:45:10.586596   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591586   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591637   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.597624   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
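Note: the symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is how the system trust store looks up CA certificates. Deriving the hash and creating the link for one certificate:

CERT=/usr/share/ca-certificates/minikubeCA.pem
# Subject hash, e.g. b5213941 for this run's minikubeCA; the ".0" suffix disambiguates hash collisions.
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"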
	I0816 13:45:10.608838   57240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:45:10.613785   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:45:10.619902   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:45:10.625554   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:45:10.631526   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:45:10.637251   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:45:10.643210   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
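Note: each "-checkend 86400" call above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours). Used as a guard in a script:

# Fail loudly if the cert expires within a day.
if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "certificate expires within 24h; regenerate it before continuing" >&2
  exit 1
fi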
	I0816 13:45:10.649187   57240 kubeadm.go:392] StartCluster: {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:45:10.649298   57240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:45:10.649349   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.686074   57240 cri.go:89] found id: ""
	I0816 13:45:10.686153   57240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:45:10.696504   57240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:45:10.696527   57240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:45:10.696581   57240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:45:10.706447   57240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:45:10.707413   57240 kubeconfig.go:125] found "embed-certs-302520" server: "https://192.168.39.125:8443"
	I0816 13:45:10.710045   57240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:45:10.719563   57240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0816 13:45:10.719599   57240 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:45:10.719613   57240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:45:10.719665   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.759584   57240 cri.go:89] found id: ""
	I0816 13:45:10.759661   57240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:45:10.776355   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:45:10.786187   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:45:10.786205   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:45:10.786244   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:45:10.795644   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:45:10.795723   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:45:10.807988   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:45:10.817234   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:45:10.817299   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:45:10.826601   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.835702   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:45:10.835763   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.845160   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:45:10.855522   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:45:10.855578   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
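Note: the four grep/rm pairs above are the stale-kubeconfig cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. The same check written as a loop:

ENDPOINT='https://control-plane.minikube.internal:8443'
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Keep the file only if it already targets the expected control-plane endpoint.
  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done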
	I0816 13:45:10.865445   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:45:10.875429   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:10.988958   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.195215   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206217359s)
	I0816 13:45:12.195241   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.432322   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.514631   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.606133   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:45:12.606238   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.106823   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.606856   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.624866   57240 api_server.go:72] duration metric: took 1.018743147s to wait for apiserver process to appear ...
	I0816 13:45:13.624897   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:45:13.624930   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:13.625953   57240 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I0816 13:45:14.124979   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.607443   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.107647   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.357916   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.358986   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.404020   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.404049   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.404062   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.462649   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.462685   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.625998   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.632560   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:16.632586   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.124984   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.133533   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:17.133563   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.624993   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.629720   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:45:17.635848   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:45:17.635874   57240 api_server.go:131] duration metric: took 4.010970063s to wait for apiserver health ...
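Note: the 403 and 500 responses above are expected while the restarted apiserver is still completing its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes); the check simply retries until /healthz answers "ok". A minimal poll of the same endpoint:

# Anonymous access to /healthz is permitted once the RBAC bootstrap roles exist,
# so early 403/500 responses just mean "keep waiting".
for i in $(seq 1 120); do
  body=$(curl -sk https://192.168.39.125:8443/healthz)
  [ "$body" = "ok" ] && break
  sleep 1
done
echo "healthz: ${body:-unreachable}"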
	I0816 13:45:17.635885   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:17.635892   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:17.637609   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:45:17.638828   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:45:17.650034   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
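Note: the 496-byte file written above is the bridge CNI config that the "kvm2 driver + crio runtime" recommendation resolves to. A hand-written equivalent would look roughly like the following; the field values are illustrative (same 10.244.0.0/16 pod CIDR as this run), not a byte-for-byte copy of minikube's 1-k8s.conflist:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF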
	I0816 13:45:17.681352   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:45:17.691752   57240 system_pods.go:59] 8 kube-system pods found
	I0816 13:45:17.691784   57240 system_pods.go:61] "coredns-6f6b679f8f-phxht" [df7bd896-d1c6-4a0e-aead-e3db36e915da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:45:17.691792   57240 system_pods.go:61] "etcd-embed-certs-302520" [ef7bae1c-7cd3-4d8e-b2fc-e5837f4c5a1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:45:17.691801   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [957ba8ec-91ae-4cea-902f-81a286e35659] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:45:17.691806   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [afbfc2da-5435-4ebb-ada0-e0edc9d09a7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:45:17.691817   57240 system_pods.go:61] "kube-proxy-nnc6b" [ec8b820d-6f1d-4777-9f76-7efffb4e6e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:45:17.691824   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [077024c8-3dfd-4e8c-850a-333b63d3f23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:45:17.691832   57240 system_pods.go:61] "metrics-server-6867b74b74-9277d" [5d7ee9e5-b40c-4840-9fb4-0b7b8be9faca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:45:17.691837   57240 system_pods.go:61] "storage-provisioner" [6f3dc7f6-a3e0-4bc3-b362-e1d97633d0eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:45:17.691854   57240 system_pods.go:74] duration metric: took 10.481601ms to wait for pod list to return data ...
	I0816 13:45:17.691861   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:45:17.695253   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:45:17.695278   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:45:17.695292   57240 node_conditions.go:105] duration metric: took 3.4236ms to run NodePressure ...
	I0816 13:45:17.695311   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:17.996024   57240 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999887   57240 kubeadm.go:739] kubelet initialised
	I0816 13:45:17.999906   57240 kubeadm.go:740] duration metric: took 3.859222ms waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999913   57240 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:18.004476   57240 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.009142   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009162   57240 pod_ready.go:82] duration metric: took 4.665087ms for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.009170   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009175   57240 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.014083   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014102   57240 pod_ready.go:82] duration metric: took 4.91913ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.014118   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014124   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.018257   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018276   57240 pod_ready.go:82] duration metric: took 4.14471ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.018283   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018288   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.085229   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085257   57240 pod_ready.go:82] duration metric: took 66.95357ms for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.085269   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085276   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485094   57240 pod_ready.go:93] pod "kube-proxy-nnc6b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:18.485124   57240 pod_ready.go:82] duration metric: took 399.831747ms for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485135   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.606838   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.857109   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.858242   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.491635   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:22.492484   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:24.992054   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.107371   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.607589   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.357770   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.358102   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:26.992544   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.491552   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.106454   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:28.107564   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.115954   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:27.358671   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.857631   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:31.862487   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.491291   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:30.491320   57240 pod_ready.go:82] duration metric: took 12.006175772s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:30.491333   57240 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:32.497481   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.500397   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:32.606912   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.607600   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.358649   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:36.856727   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.003939   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.498178   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:37.106303   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.107343   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:38.857564   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.858908   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:41.998945   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:41.609813   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:44.116781   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.358670   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:45.857710   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.497146   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:48.498302   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.606815   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.107387   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:47.858274   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.861382   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.999007   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:53.497248   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:51.607047   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.106991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:52.363317   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.857924   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.497520   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.498597   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.997729   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.107187   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:58.606550   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.358875   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.857940   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.859677   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.998260   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.498473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:01.106533   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:03.107325   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.107629   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.359077   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.857557   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.997606   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:08.998915   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:07.606112   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.608137   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.357201   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.359055   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.497851   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.998849   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:12.106330   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:14.106732   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.857302   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:15.858233   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:16.497347   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.497948   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:16.107456   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.107650   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.608155   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.357472   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.857964   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.498563   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:22.998319   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:23.106190   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.106765   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:23.356443   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.356705   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.496930   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.497933   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.500639   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:46:27.606739   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.606870   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.357970   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.861012   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:31.998584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.497511   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.106718   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.606961   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:32.357555   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.857062   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.857679   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.997085   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.999741   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:37.113026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:39.606526   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.857809   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.358047   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.497724   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.498815   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.108651   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:44.606597   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.856419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.858360   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.998195   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.497584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:47.106288   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:49.108117   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.357517   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.857069   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.501014   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:52.998618   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.607335   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:54.106631   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:53.357847   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.857456   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.497897   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.999173   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
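	The entries above show one complete diagnostic pass that minikube repeats while the kube-apiserver fails to come up: list CRI containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of that pass, built only from the commands visible in the log (the 400-line limits and the v1.20.0 kubectl path are taken verbatim from the entries above; the only assumption is that it is run on the node over SSH, as ssh_runner does):

	  # per-component container lookup, as in cri.go / logs.go
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$c"
	  done
	  sudo journalctl -u kubelet -n 400                                          # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400                                             # CRI-O logs
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status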
	I0816 13:46:56.107168   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:58.606805   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.607098   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.857681   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.357433   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.497738   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.998036   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.998316   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:02.607546   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:05.106955   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.358368   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.359618   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.858437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.999109   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.498087   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:07.107809   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.606867   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.358384   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.359451   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.997273   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.998709   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.107421   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:14.606538   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.857389   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.857870   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:16.498127   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.498877   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
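	Every describe-nodes attempt fails the same way because nothing is serving on localhost:8443, which is consistent with crictl never finding a kube-apiserver container. A quick manual confirmation from the node (a sketch, not part of the test itself; the healthz probe is an added illustration, the other two commands are the ones the log already runs):

	  sudo pgrep -xnf kube-apiserver.*minikube.*   # same process check the log runs; no match while the apiserver is down
	  curl -sk https://localhost:8443/healthz      # expected to fail with "connection refused" in this state
	  sudo crictl ps -a --name kube-apiserver      # empty output: the kubelet never started the apiserver container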
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:16.607841   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:19.107952   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.358184   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.857525   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.499116   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.996642   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.998888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.607247   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:23.608388   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.857995   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.858506   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.497595   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.997611   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
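	The cycle just above runs `sudo crictl ps -a --quiet --name=<component>` for each control-plane component and finds no containers. A minimal, illustrative Go sketch of that listing step, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (the real code lives in cri.go/logs.go):

	package main

	import (
		"fmt"
		"strings"
	)

	// runSSH is a hypothetical stand-in for minikube's ssh_runner; it would run
	// the command on the node over SSH and return its stdout.
	func runSSH(cmd string) (string, error) { return "", nil }

	// listCRIContainers mirrors the "listing CRI containers" step: ask crictl for
	// container IDs matching a component name and report when none are found.
	func listCRIContainers(name string) ([]string, error) {
		out, err := runSSH(fmt.Sprintf("sudo crictl ps -a --quiet --name=%s", name))
		if err != nil {
			return nil, err
		}
		ids := strings.Fields(out)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
		return ids, nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			listCRIContainers(c) // each call finds zero containers in the failing run above
		}
	}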
	I0816 13:47:26.107830   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:28.607294   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:30.613619   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.356723   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.358026   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.857750   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:32.497685   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.500214   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
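	The interleaved pod_ready.go lines come from parallel test runs polling whether each metrics-server pod reports Ready. A minimal client-go sketch of that check, assuming kubeconfig access and the pod name taken from the log; the real helper is minikube's test harness, so the structure here is illustrative:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"metrics-server-6867b74b74-j9tqh", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A "Ready":"False" line in the log corresponds to this returning false.
		fmt.Println(isPodReady(pod))
	}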
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:33.106665   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:35.107026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.358059   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.858347   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.998289   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.499047   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
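	Taken together, each block above is one iteration of a retry loop: pgrep for kube-apiserver, list CRI containers, gather kubelet/dmesg/describe-nodes/CRI-O/container-status logs, then try again a few seconds later. A rough Go sketch of that loop; the timeout and interval are assumptions, and the real control flow sits in minikube's bootstrapper:

	package main

	import (
		"fmt"
		"time"
	)

	// apiServerProcessRunning stands in for the `pgrep -xnf kube-apiserver.*minikube.*`
	// check; in the failing run above it keeps returning false.
	func apiServerProcessRunning() bool { return false }

	// gatherLogs stands in for the kubelet / dmesg / describe-nodes / CRI-O /
	// container-status collection done on every failed attempt.
	func gatherLogs() { fmt.Println("gathering logs ...") }

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			if apiServerProcessRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			gatherLogs()
			time.Sleep(3 * time.Second) // roughly the gap between attempts in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}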
	I0816 13:47:37.107091   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.108555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.358316   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.858946   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.998041   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.498467   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:41.606734   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:43.608097   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.357080   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.357589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.503576   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.998084   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.105523   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.107122   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.606969   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.357769   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.859617   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.998256   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.498046   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:52.607529   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.107256   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.357315   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.357380   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.498193   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.498238   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.997582   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:57.606146   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.606603   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.358416   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.861332   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.998633   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.498646   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:01.607315   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.107759   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:02.358142   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.857766   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:06.998437   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.496888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:06.110473   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:08.606467   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:07.356589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.357419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.357839   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.497754   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.997641   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:11.108299   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.612816   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.856979   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.857269   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.999100   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.497473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:16.105992   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.606940   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.607532   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.357812   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.358351   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.498644   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.998103   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:24.999122   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:23.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.607110   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.858674   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.358411   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.497538   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.498345   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:27.607276   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:30.106660   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.856585   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.857836   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:31.498615   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:33.999039   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.107363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.607045   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.357801   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.856980   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.857321   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.497064   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.498666   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:48:36.608364   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:39.106585   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.857386   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.358217   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:40.997776   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.998819   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:44.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.106806   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:43.107007   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:45.606716   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.357715   57440 pod_ready.go:82] duration metric: took 4m0.006671881s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:48:42.357741   57440 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:48:42.357749   57440 pod_ready.go:39] duration metric: took 4m4.542302811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:48:42.357762   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:48:42.357787   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:42.357834   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:42.415231   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:42.415255   57440 cri.go:89] found id: ""
	I0816 13:48:42.415265   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:42.415324   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.421713   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:42.421779   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:42.462840   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:42.462867   57440 cri.go:89] found id: ""
	I0816 13:48:42.462878   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:42.462940   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.467260   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:42.467321   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:42.505423   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:42.505449   57440 cri.go:89] found id: ""
	I0816 13:48:42.505458   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:42.505517   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.510072   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:42.510124   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:42.551873   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:42.551894   57440 cri.go:89] found id: ""
	I0816 13:48:42.551902   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:42.551949   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.556735   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:42.556783   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:42.595853   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:42.595884   57440 cri.go:89] found id: ""
	I0816 13:48:42.595895   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:42.595948   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.600951   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:42.601003   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:42.639288   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.639311   57440 cri.go:89] found id: ""
	I0816 13:48:42.639320   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:42.639367   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.644502   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:42.644554   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:42.685041   57440 cri.go:89] found id: ""
	I0816 13:48:42.685065   57440 logs.go:276] 0 containers: []
	W0816 13:48:42.685074   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:42.685079   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:42.685137   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:42.722485   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.722506   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:42.722510   57440 cri.go:89] found id: ""
	I0816 13:48:42.722519   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:42.722590   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.727136   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.731169   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:42.731189   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.794303   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:42.794334   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.833686   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:42.833715   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:42.874606   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:42.874632   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:42.948074   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:42.948111   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:42.963546   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:42.963571   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:43.027410   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:43.027446   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:43.067643   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:43.067670   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:43.115156   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:43.115183   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:43.246588   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:43.246618   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:43.291042   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:43.291069   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:43.330741   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:43.330771   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:43.371970   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:43.371999   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:46.357313   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:46.373368   57440 api_server.go:72] duration metric: took 4m16.32601859s to wait for apiserver process to appear ...
	I0816 13:48:46.373396   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:48:46.373426   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:46.373473   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:46.411034   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:46.411059   57440 cri.go:89] found id: ""
	I0816 13:48:46.411067   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:46.411121   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.415948   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:46.416009   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:46.458648   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:46.458673   57440 cri.go:89] found id: ""
	I0816 13:48:46.458684   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:46.458735   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.463268   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:46.463332   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:46.502120   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:46.502139   57440 cri.go:89] found id: ""
	I0816 13:48:46.502149   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:46.502319   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.508632   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:46.508692   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:46.552732   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.552757   57440 cri.go:89] found id: ""
	I0816 13:48:46.552765   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:46.552812   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.557459   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:46.557524   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:46.598286   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:46.598308   57440 cri.go:89] found id: ""
	I0816 13:48:46.598330   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:46.598403   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.603050   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:46.603110   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:46.641616   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:46.641638   57440 cri.go:89] found id: ""
	I0816 13:48:46.641648   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:46.641712   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.646008   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:46.646076   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:46.682259   57440 cri.go:89] found id: ""
	I0816 13:48:46.682290   57440 logs.go:276] 0 containers: []
	W0816 13:48:46.682302   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:46.682310   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:46.682366   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:46.718955   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.718979   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:46.718985   57440 cri.go:89] found id: ""
	I0816 13:48:46.718993   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:46.719049   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.723519   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.727942   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:46.727968   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.771942   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:46.771971   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.818294   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:46.818319   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:46.887977   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:46.888021   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:46.903567   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:46.903599   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:47.010715   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:47.010747   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:47.056317   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:47.056346   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:47.114669   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:47.114696   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:47.498472   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.998541   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.606991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.607458   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.157046   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:47.157073   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:47.199364   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:47.199393   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:47.640964   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:47.641003   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:47.683503   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:47.683541   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:47.746748   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:47.746798   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.296176   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:48:50.300482   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:48:50.301550   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:48:50.301570   57440 api_server.go:131] duration metric: took 3.928168044s to wait for apiserver health ...
	I0816 13:48:50.301578   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:48:50.301599   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:50.301653   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:50.343199   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.343223   57440 cri.go:89] found id: ""
	I0816 13:48:50.343231   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:50.343276   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.347576   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:50.347651   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:50.387912   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.387937   57440 cri.go:89] found id: ""
	I0816 13:48:50.387947   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:50.388004   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.392120   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:50.392188   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:50.428655   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:50.428680   57440 cri.go:89] found id: ""
	I0816 13:48:50.428688   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:50.428734   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.432863   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:50.432941   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:50.472269   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:50.472295   57440 cri.go:89] found id: ""
	I0816 13:48:50.472304   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:50.472351   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.476961   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:50.477006   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:50.514772   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:50.514793   57440 cri.go:89] found id: ""
	I0816 13:48:50.514801   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:50.514857   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.520430   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:50.520492   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:50.564708   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:50.564733   57440 cri.go:89] found id: ""
	I0816 13:48:50.564741   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:50.564788   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.569255   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:50.569306   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:50.607803   57440 cri.go:89] found id: ""
	I0816 13:48:50.607823   57440 logs.go:276] 0 containers: []
	W0816 13:48:50.607829   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:50.607835   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:50.607888   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:50.643909   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.643934   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:50.643940   57440 cri.go:89] found id: ""
	I0816 13:48:50.643949   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:50.643994   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.648575   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.653322   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:50.653354   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:50.667847   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:50.667878   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:50.774932   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:50.774969   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.823473   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:50.823503   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.884009   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:50.884044   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.925187   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:50.925219   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.965019   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:50.965046   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:51.033614   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:51.033651   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:51.068360   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:51.068387   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:51.107768   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:51.107792   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:51.163637   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:51.163673   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:51.227436   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:51.227462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:51.265505   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:51.265531   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:54.130801   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:48:54.130828   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.130833   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.130837   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.130840   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.130843   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.130846   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.130852   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.130855   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.130862   57440 system_pods.go:74] duration metric: took 3.829279192s to wait for pod list to return data ...
	I0816 13:48:54.130868   57440 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:48:54.133253   57440 default_sa.go:45] found service account: "default"
	I0816 13:48:54.133282   57440 default_sa.go:55] duration metric: took 2.407297ms for default service account to be created ...
	I0816 13:48:54.133292   57440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:48:54.138812   57440 system_pods.go:86] 8 kube-system pods found
	I0816 13:48:54.138835   57440 system_pods.go:89] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.138841   57440 system_pods.go:89] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.138845   57440 system_pods.go:89] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.138849   57440 system_pods.go:89] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.138853   57440 system_pods.go:89] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.138856   57440 system_pods.go:89] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.138863   57440 system_pods.go:89] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.138868   57440 system_pods.go:89] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.138874   57440 system_pods.go:126] duration metric: took 5.576801ms to wait for k8s-apps to be running ...
	I0816 13:48:54.138879   57440 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:48:54.138922   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:54.154406   57440 system_svc.go:56] duration metric: took 15.507123ms WaitForService to wait for kubelet
	I0816 13:48:54.154438   57440 kubeadm.go:582] duration metric: took 4m24.107091364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:48:54.154463   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:48:54.156991   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:48:54.157012   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:48:54.157027   57440 node_conditions.go:105] duration metric: took 2.558338ms to run NodePressure ...
	I0816 13:48:54.157041   57440 start.go:241] waiting for startup goroutines ...
	I0816 13:48:54.157052   57440 start.go:246] waiting for cluster config update ...
	I0816 13:48:54.157070   57440 start.go:255] writing updated cluster config ...
	I0816 13:48:54.157381   57440 ssh_runner.go:195] Run: rm -f paused
	I0816 13:48:54.205583   57440 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:48:54.207845   57440 out.go:177] * Done! kubectl is now configured to use "no-preload-311070" cluster and "default" namespace by default
	I0816 13:48:51.999301   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.498057   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:52.107465   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.606735   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.498967   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.997311   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.606925   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.606970   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.607943   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.997760   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:02.998653   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:03.107555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.606363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.497723   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.498572   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.997905   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.607916   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.606579   58430 pod_ready.go:82] duration metric: took 4m0.00617652s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:09.606602   58430 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:49:09.606612   58430 pod_ready.go:39] duration metric: took 4m3.606005486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:09.606627   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:49:09.606652   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:09.606698   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:09.660442   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:09.660461   58430 cri.go:89] found id: ""
	I0816 13:49:09.660469   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:09.660519   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.664752   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:09.664813   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:09.701589   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:09.701615   58430 cri.go:89] found id: ""
	I0816 13:49:09.701625   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:09.701681   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.706048   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:09.706114   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:09.743810   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:09.743832   58430 cri.go:89] found id: ""
	I0816 13:49:09.743841   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:09.743898   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.748197   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:09.748271   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:09.783730   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:09.783752   58430 cri.go:89] found id: ""
	I0816 13:49:09.783765   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:09.783828   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.787845   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:09.787909   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:09.828449   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:09.828472   58430 cri.go:89] found id: ""
	I0816 13:49:09.828481   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:09.828546   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.832890   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:09.832963   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:09.880136   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:09.880164   58430 cri.go:89] found id: ""
	I0816 13:49:09.880175   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:09.880232   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.884533   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:09.884599   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:09.924776   58430 cri.go:89] found id: ""
	I0816 13:49:09.924805   58430 logs.go:276] 0 containers: []
	W0816 13:49:09.924816   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:09.924828   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:09.924889   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:09.971663   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:09.971689   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:09.971695   58430 cri.go:89] found id: ""
	I0816 13:49:09.971705   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:09.971770   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.976297   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.980815   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:09.980844   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:10.020287   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:10.020317   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:10.060266   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:10.060291   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:10.113574   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:10.113608   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:10.153457   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:10.153482   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:10.191530   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:10.191559   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:10.206267   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:10.206296   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:10.326723   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:10.326753   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:10.377541   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:10.377574   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:10.895387   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:10.895445   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:10.947447   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:10.947475   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:11.997943   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:13.998932   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:11.020745   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:11.020786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:11.081224   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:11.081257   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.632726   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:49:13.651185   58430 api_server.go:72] duration metric: took 4m14.880109274s to wait for apiserver process to appear ...
	I0816 13:49:13.651214   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:49:13.651254   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:13.651308   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:13.691473   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:13.691495   58430 cri.go:89] found id: ""
	I0816 13:49:13.691503   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:13.691582   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.695945   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:13.695998   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:13.730798   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:13.730830   58430 cri.go:89] found id: ""
	I0816 13:49:13.730840   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:13.730913   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.735156   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:13.735222   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:13.769612   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:13.769639   58430 cri.go:89] found id: ""
	I0816 13:49:13.769650   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:13.769710   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.773690   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:13.773745   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:13.815417   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:13.815444   58430 cri.go:89] found id: ""
	I0816 13:49:13.815454   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:13.815515   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.819596   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:13.819666   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:13.852562   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.852587   58430 cri.go:89] found id: ""
	I0816 13:49:13.852597   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:13.852657   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.856697   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:13.856757   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:13.902327   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:13.902346   58430 cri.go:89] found id: ""
	I0816 13:49:13.902353   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:13.902416   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.906789   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:13.906840   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:13.943401   58430 cri.go:89] found id: ""
	I0816 13:49:13.943430   58430 logs.go:276] 0 containers: []
	W0816 13:49:13.943438   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:13.943443   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:13.943490   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:13.979154   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:13.979178   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:13.979182   58430 cri.go:89] found id: ""
	I0816 13:49:13.979189   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:13.979235   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.983301   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.988522   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:13.988545   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:14.005891   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:14.005916   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:14.055686   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:14.055713   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:14.104975   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:14.105010   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:14.145761   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:14.145786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:14.198935   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:14.198966   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:14.662287   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:14.662323   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:14.717227   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:14.717256   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:14.789824   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:14.789868   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:14.902892   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:14.902922   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:14.946711   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:14.946736   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:14.986143   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:14.986175   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:15.022107   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:15.022138   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:16.497493   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:18.497979   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:17.556820   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:49:17.562249   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:49:17.563264   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:49:17.563280   58430 api_server.go:131] duration metric: took 3.912060569s to wait for apiserver health ...
	I0816 13:49:17.563288   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:49:17.563312   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:17.563377   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:17.604072   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:17.604099   58430 cri.go:89] found id: ""
	I0816 13:49:17.604109   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:17.604163   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.608623   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:17.608678   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:17.650241   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.650267   58430 cri.go:89] found id: ""
	I0816 13:49:17.650275   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:17.650328   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.654928   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:17.655000   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:17.690057   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:17.690085   58430 cri.go:89] found id: ""
	I0816 13:49:17.690095   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:17.690164   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.694636   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:17.694692   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:17.730134   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:17.730167   58430 cri.go:89] found id: ""
	I0816 13:49:17.730177   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:17.730238   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.734364   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:17.734420   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:17.769579   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:17.769595   58430 cri.go:89] found id: ""
	I0816 13:49:17.769603   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:17.769643   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.773543   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:17.773601   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:17.814287   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:17.814310   58430 cri.go:89] found id: ""
	I0816 13:49:17.814319   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:17.814393   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.818904   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:17.818977   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:17.858587   58430 cri.go:89] found id: ""
	I0816 13:49:17.858614   58430 logs.go:276] 0 containers: []
	W0816 13:49:17.858622   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:17.858627   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:17.858674   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:17.901759   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:17.901784   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:17.901788   58430 cri.go:89] found id: ""
	I0816 13:49:17.901796   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:17.901853   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.906139   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.910273   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:17.910293   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:17.924565   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:17.924590   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.971895   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:17.971927   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:18.011332   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:18.011364   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:18.049264   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:18.049292   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:18.084004   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:18.084030   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:18.136961   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:18.137000   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:18.210452   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:18.210483   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:18.327398   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:18.327429   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:18.378777   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:18.378809   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:18.430052   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:18.430088   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:18.496775   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:18.496806   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:18.540493   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:18.540523   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:21.451644   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:49:21.451673   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.451679   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.451682   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.451687   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.451691   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.451694   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.451701   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.451705   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.451713   58430 system_pods.go:74] duration metric: took 3.888418707s to wait for pod list to return data ...
	I0816 13:49:21.451719   58430 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:49:21.454558   58430 default_sa.go:45] found service account: "default"
	I0816 13:49:21.454578   58430 default_sa.go:55] duration metric: took 2.853068ms for default service account to be created ...
	I0816 13:49:21.454585   58430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:49:21.458906   58430 system_pods.go:86] 8 kube-system pods found
	I0816 13:49:21.458930   58430 system_pods.go:89] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.458935   58430 system_pods.go:89] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.458941   58430 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.458944   58430 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.458948   58430 system_pods.go:89] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.458951   58430 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.458958   58430 system_pods.go:89] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.458961   58430 system_pods.go:89] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.458968   58430 system_pods.go:126] duration metric: took 4.378971ms to wait for k8s-apps to be running ...
	I0816 13:49:21.458975   58430 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:49:21.459016   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:21.476060   58430 system_svc.go:56] duration metric: took 17.075817ms WaitForService to wait for kubelet
	I0816 13:49:21.476086   58430 kubeadm.go:582] duration metric: took 4m22.705015833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:49:21.476109   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:49:21.479557   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:49:21.479585   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:49:21.479600   58430 node_conditions.go:105] duration metric: took 3.483638ms to run NodePressure ...
	I0816 13:49:21.479613   58430 start.go:241] waiting for startup goroutines ...
	I0816 13:49:21.479622   58430 start.go:246] waiting for cluster config update ...
	I0816 13:49:21.479637   58430 start.go:255] writing updated cluster config ...
	I0816 13:49:21.479949   58430 ssh_runner.go:195] Run: rm -f paused
	I0816 13:49:21.530237   58430 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:49:21.532328   58430 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893736" cluster and "default" namespace by default
	I0816 13:49:20.998486   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:23.498358   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:25.498502   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:27.998622   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:30.491886   57240 pod_ready.go:82] duration metric: took 4m0.000539211s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:30.491929   57240 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 13:49:30.491945   57240 pod_ready.go:39] duration metric: took 4m12.492024576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:30.491972   57240 kubeadm.go:597] duration metric: took 4m19.795438093s to restartPrimaryControlPlane
	W0816 13:49:30.492032   57240 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:49:30.492059   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:49:56.783263   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.29118348s)
	I0816 13:49:56.783321   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:56.798550   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:49:56.810542   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:49:56.820837   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:49:56.820873   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:49:56.820947   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:49:56.831998   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:49:56.832057   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:49:56.842351   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:49:56.852062   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:49:56.852119   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:49:56.862337   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.872000   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:49:56.872050   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.881764   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:49:56.891211   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:49:56.891276   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:49:56.900969   57240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:49:56.942823   57240 kubeadm.go:310] W0816 13:49:56.895203    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:56.943751   57240 kubeadm.go:310] W0816 13:49:56.896255    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:57.049491   57240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:05.244505   57240 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 13:50:05.244561   57240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:05.244657   57240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:05.244775   57240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:05.244901   57240 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:05.244989   57240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:05.246568   57240 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:05.246667   57240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:05.246779   57240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:05.246885   57240 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:05.246968   57240 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:05.247065   57240 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:05.247125   57240 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:05.247195   57240 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:05.247260   57240 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:05.247372   57240 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:05.247480   57240 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:05.247521   57240 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:05.247590   57240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:05.247670   57240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:05.247751   57240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 13:50:05.247830   57240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:05.247886   57240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:05.247965   57240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:05.248046   57240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:05.248100   57240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:05.249601   57240 out.go:235]   - Booting up control plane ...
	I0816 13:50:05.249698   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:05.249779   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:05.249835   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:05.249930   57240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:05.250007   57240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:05.250046   57240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:05.250184   57240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 13:50:05.250289   57240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 13:50:05.250343   57240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296228s
	I0816 13:50:05.250403   57240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 13:50:05.250456   57240 kubeadm.go:310] [api-check] The API server is healthy after 5.002119618s
	I0816 13:50:05.250546   57240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 13:50:05.250651   57240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 13:50:05.250700   57240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 13:50:05.250876   57240 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-302520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 13:50:05.250930   57240 kubeadm.go:310] [bootstrap-token] Using token: dta4cr.diyk2wto3tx3ixlb
	I0816 13:50:05.252120   57240 out.go:235]   - Configuring RBAC rules ...
	I0816 13:50:05.252207   57240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 13:50:05.252287   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 13:50:05.252418   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 13:50:05.252542   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 13:50:05.252648   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 13:50:05.252724   57240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 13:50:05.252819   57240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 13:50:05.252856   57240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 13:50:05.252895   57240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 13:50:05.252901   57240 kubeadm.go:310] 
	I0816 13:50:05.253004   57240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 13:50:05.253022   57240 kubeadm.go:310] 
	I0816 13:50:05.253116   57240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 13:50:05.253126   57240 kubeadm.go:310] 
	I0816 13:50:05.253155   57240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 13:50:05.253240   57240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 13:50:05.253283   57240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 13:50:05.253289   57240 kubeadm.go:310] 
	I0816 13:50:05.253340   57240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 13:50:05.253347   57240 kubeadm.go:310] 
	I0816 13:50:05.253405   57240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 13:50:05.253423   57240 kubeadm.go:310] 
	I0816 13:50:05.253484   57240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 13:50:05.253556   57240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 13:50:05.253621   57240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 13:50:05.253629   57240 kubeadm.go:310] 
	I0816 13:50:05.253710   57240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 13:50:05.253840   57240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 13:50:05.253855   57240 kubeadm.go:310] 
	I0816 13:50:05.253962   57240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254087   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 13:50:05.254122   57240 kubeadm.go:310] 	--control-plane 
	I0816 13:50:05.254126   57240 kubeadm.go:310] 
	I0816 13:50:05.254202   57240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 13:50:05.254209   57240 kubeadm.go:310] 
	I0816 13:50:05.254280   57240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254394   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 13:50:05.254407   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:50:05.254416   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:50:05.255889   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:50:05.257086   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:50:05.268668   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:50:05.288676   57240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:50:05.288735   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.288755   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-302520 minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=embed-certs-302520 minikube.k8s.io/primary=true
	I0816 13:50:05.494987   57240 ops.go:34] apiserver oom_adj: -16
	I0816 13:50:05.495066   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.995792   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.495937   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.995513   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.495437   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.995600   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.495194   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.995101   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.495533   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.659383   57240 kubeadm.go:1113] duration metric: took 4.370714211s to wait for elevateKubeSystemPrivileges
	I0816 13:50:09.659425   57240 kubeadm.go:394] duration metric: took 4m59.010243945s to StartCluster
	I0816 13:50:09.659448   57240 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.659529   57240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:50:09.661178   57240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.661475   57240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:50:09.661579   57240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:50:09.661662   57240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-302520"
	I0816 13:50:09.661678   57240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-302520"
	I0816 13:50:09.661693   57240 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-302520"
	W0816 13:50:09.661701   57240 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:50:09.661683   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:50:09.661707   57240 addons.go:69] Setting metrics-server=true in profile "embed-certs-302520"
	I0816 13:50:09.661730   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.661732   57240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-302520"
	I0816 13:50:09.661744   57240 addons.go:234] Setting addon metrics-server=true in "embed-certs-302520"
	W0816 13:50:09.661758   57240 addons.go:243] addon metrics-server should already be in state true
	I0816 13:50:09.661789   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.662063   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662070   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662092   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662093   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662125   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662177   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.663568   57240 out.go:177] * Verifying Kubernetes components...
	I0816 13:50:09.665144   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:50:09.679643   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 13:50:09.679976   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0816 13:50:09.680138   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680460   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680652   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.680677   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681040   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.681060   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681084   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681449   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681659   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.681706   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.681737   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.682300   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0816 13:50:09.682644   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.683099   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.683121   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.683464   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.683993   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.684020   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.684695   57240 addons.go:234] Setting addon default-storageclass=true in "embed-certs-302520"
	W0816 13:50:09.684713   57240 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:50:09.684733   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.685016   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.685044   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.699612   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0816 13:50:09.700235   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.700244   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0816 13:50:09.700776   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.700795   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.700827   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.701285   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.701369   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0816 13:50:09.701457   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.701467   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.701939   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.701980   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.702188   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.702209   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.702494   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.702618   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.702635   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.703042   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.703250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.704568   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.705308   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.707074   57240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:50:09.707074   57240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:50:09.708773   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:50:09.708792   57240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:50:09.708813   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.708894   57240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:09.708924   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:50:09.708941   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.714305   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714338   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714812   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714874   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714928   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.715181   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715215   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715363   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715399   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715512   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715556   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715634   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.715876   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.724172   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0816 13:50:09.724636   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.725184   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.725213   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.725596   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.725799   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.727188   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.727410   57240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:09.727426   57240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:50:09.727447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.729840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730228   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.730255   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730534   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.730723   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.730867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.731014   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.899195   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:50:09.939173   57240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958087   57240 node_ready.go:49] node "embed-certs-302520" has status "Ready":"True"
	I0816 13:50:09.958119   57240 node_ready.go:38] duration metric: took 18.911367ms for node "embed-certs-302520" to be "Ready" ...
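The node Ready check above can be reproduced from the host with kubectl; a minimal sketch (the context name is taken from the "Done!" message later in this log and is assumed to be the active kubeconfig context):

	# query the node's Ready condition directly; expect "True"
	kubectl --context embed-certs-302520 get node embed-certs-302520 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'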
	I0816 13:50:09.958130   57240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:50:09.963326   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:10.083721   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:10.184794   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:10.203192   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:50:10.203214   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:50:10.285922   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:50:10.285950   57240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:50:10.370797   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:10.370825   57240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:50:10.420892   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.420942   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421261   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421280   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.421282   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421293   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.421303   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421556   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421620   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421625   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.427229   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.427250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.427591   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.427638   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.427655   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.454486   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:11.225905   57240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041077031s)
	I0816 13:50:11.225958   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.225969   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226248   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226268   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.226273   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226295   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.226310   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226561   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226608   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226627   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447454   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447484   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.447823   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.447890   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.447908   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447924   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447936   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.448179   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.448195   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.448241   57240 addons.go:475] Verifying addon metrics-server=true in "embed-certs-302520"
	I0816 13:50:11.450274   57240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 13:50:11.451676   57240 addons.go:510] duration metric: took 1.790101568s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
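A quick host-side spot check of the addons enabled above (object names are inferred from the kube-system pod list later in this log, so treat them as assumptions):

	# default-storageclass, metrics-server and storage-provisioner respectively
	kubectl --context embed-certs-302520 get storageclass
	kubectl --context embed-certs-302520 -n kube-system get deploy metrics-server
	kubectl --context embed-certs-302520 -n kube-system get pod storage-provisioner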
	I0816 13:50:11.971087   57240 pod_ready.go:103] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:12.470167   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.470193   57240 pod_ready.go:82] duration metric: took 2.506842546s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.470203   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474959   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.474980   57240 pod_ready.go:82] duration metric: took 4.769458ms for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474988   57240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479388   57240 pod_ready.go:93] pod "etcd-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.479410   57240 pod_ready.go:82] duration metric: took 4.41564ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479421   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483567   57240 pod_ready.go:93] pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.483589   57240 pod_ready.go:82] duration metric: took 4.159906ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483600   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:14.490212   57240 pod_ready.go:103] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:15.990204   57240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.990226   57240 pod_ready.go:82] duration metric: took 3.506618768s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.990235   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994580   57240 pod_ready.go:93] pod "kube-proxy-spgtw" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.994597   57240 pod_ready.go:82] duration metric: took 4.356588ms for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994605   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068472   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:16.068495   57240 pod_ready.go:82] duration metric: took 73.884906ms for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068503   57240 pod_ready.go:39] duration metric: took 6.110362477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
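An equivalent host-side wait for one of the label selectors listed above; the other selectors follow the same pattern (context name assumed as before):

	# block until the CoreDNS pods report Ready, mirroring the pod_ready wait above
	kubectl --context embed-certs-302520 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m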
	I0816 13:50:16.068519   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:50:16.068579   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:50:16.086318   57240 api_server.go:72] duration metric: took 6.424804798s to wait for apiserver process to appear ...
	I0816 13:50:16.086345   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:50:16.086361   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:50:16.091170   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:50:16.092122   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:50:16.092138   57240 api_server.go:131] duration metric: took 5.787898ms to wait for apiserver health ...
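The same health probe can be issued by hand against the endpoint shown above; -k skips verification of the cluster's self-signed serving certificate:

	# expect the body "ok" with HTTP 200
	curl -k https://192.168.39.125:8443/healthz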
	I0816 13:50:16.092146   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:50:16.271303   57240 system_pods.go:59] 9 kube-system pods found
	I0816 13:50:16.271338   57240 system_pods.go:61] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.271344   57240 system_pods.go:61] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.271348   57240 system_pods.go:61] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.271353   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.271359   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.271364   57240 system_pods.go:61] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.271370   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.271379   57240 system_pods.go:61] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.271389   57240 system_pods.go:61] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.271398   57240 system_pods.go:74] duration metric: took 179.244421ms to wait for pod list to return data ...
	I0816 13:50:16.271410   57240 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:50:16.468167   57240 default_sa.go:45] found service account: "default"
	I0816 13:50:16.468196   57240 default_sa.go:55] duration metric: took 196.779435ms for default service account to be created ...
	I0816 13:50:16.468207   57240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:50:16.670917   57240 system_pods.go:86] 9 kube-system pods found
	I0816 13:50:16.670943   57240 system_pods.go:89] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.670949   57240 system_pods.go:89] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.670953   57240 system_pods.go:89] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.670957   57240 system_pods.go:89] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.670960   57240 system_pods.go:89] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.670963   57240 system_pods.go:89] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.670967   57240 system_pods.go:89] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.670972   57240 system_pods.go:89] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.670976   57240 system_pods.go:89] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.670984   57240 system_pods.go:126] duration metric: took 202.771216ms to wait for k8s-apps to be running ...
	I0816 13:50:16.670990   57240 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:50:16.671040   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:16.686873   57240 system_svc.go:56] duration metric: took 15.876641ms WaitForService to wait for kubelet
	I0816 13:50:16.686906   57240 kubeadm.go:582] duration metric: took 7.025397638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:50:16.686925   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:50:16.869367   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:50:16.869393   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:50:16.869405   57240 node_conditions.go:105] duration metric: took 182.475776ms to run NodePressure ...
	I0816 13:50:16.869420   57240 start.go:241] waiting for startup goroutines ...
	I0816 13:50:16.869427   57240 start.go:246] waiting for cluster config update ...
	I0816 13:50:16.869436   57240 start.go:255] writing updated cluster config ...
	I0816 13:50:16.869686   57240 ssh_runner.go:195] Run: rm -f paused
	I0816 13:50:16.919168   57240 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:50:16.921207   57240 out.go:177] * Done! kubectl is now configured to use "embed-certs-302520" cluster and "default" namespace by default
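After the "Done!" line the profile's context is the active one, so a minimal sanity check from the host would be:

	kubectl config current-context   # expected: embed-certs-302520
	kubectl get nodes -o wide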
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
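The kubelet checks suggested in the failure output above can be run from the host over SSH; a sketch, with <profile> standing in for the failing profile's name (not shown in this excerpt):

	minikube -p <profile> ssh -- sudo systemctl status kubelet
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet
	minikube -p <profile> ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a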
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
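The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before the retry. Run inside the guest, the equivalent shell loop is roughly:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done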
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
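The log-gathering commands above can be reproduced inside the guest to inspect the same sources by hand:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400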
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.538449626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816895538417147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a42dde22-5854-4f43-bb0a-55e7c2ab3b21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.539129796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=713fd6a4-726e-4c14-9299-25a651131646 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.539200227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=713fd6a4-726e-4c14-9299-25a651131646 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.539240738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=713fd6a4-726e-4c14-9299-25a651131646 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.573771866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63d2e3c3-dbb1-4235-b658-a944df81b175 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.573874295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63d2e3c3-dbb1-4235-b658-a944df81b175 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.575290421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f20dc28-3f4f-4431-afed-ae6661fd1f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.575794223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816895575764190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f20dc28-3f4f-4431-afed-ae6661fd1f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.576383880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b94e11-b482-4a97-8f02-f74a61e936a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.576460221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b94e11-b482-4a97-8f02-f74a61e936a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.576497904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16b94e11-b482-4a97-8f02-f74a61e936a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.608698576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28652aa9-dcc2-44b9-af80-274d288ad75c name=/runtime.v1.RuntimeService/Version
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.608836298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28652aa9-dcc2-44b9-af80-274d288ad75c name=/runtime.v1.RuntimeService/Version
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.610345177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7947ee8-f3f8-4f61-beec-ff43998e488a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.610890665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816895610855327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7947ee8-f3f8-4f61-beec-ff43998e488a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.611519184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=242ac963-90b4-4bbd-973f-e418a71f484f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.611625372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=242ac963-90b4-4bbd-973f-e418a71f484f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.611667237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=242ac963-90b4-4bbd-973f-e418a71f484f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.646341054Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a55e7073-dae0-41fd-a6f9-cf8ae55c4317 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.646438872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a55e7073-dae0-41fd-a6f9-cf8ae55c4317 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.647703085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67ca0dfb-9ad3-4fa5-9bdb-908ecf82a50a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.648084054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723816895648056499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67ca0dfb-9ad3-4fa5-9bdb-908ecf82a50a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.648821831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71ce9ff6-3f04-4afb-b7c1-8dba20c06067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.648872652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71ce9ff6-3f04-4afb-b7c1-8dba20c06067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:01:35 old-k8s-version-882237 crio[655]: time="2024-08-16 14:01:35.648902887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71ce9ff6-3f04-4afb-b7c1-8dba20c06067 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 13:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050110] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.904148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.568641] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.219540] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.067905] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075212] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.209113] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.188995] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.278563] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +6.705927] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.067606] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.266713] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[ +11.277225] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 13:48] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[Aug16 13:50] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +0.065917] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:01:35 up 17 min,  0 users,  load average: 0.00, 0.03, 0.06
	Linux old-k8s-version-882237 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000446460, 0xc000a7c788, 0x70c7020, 0x0, 0x0)
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000498540)
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1245 +0x7e
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: goroutine 168 [select]:
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000863c20, 0xc000bb9401, 0xc000a68e00, 0xc000b897f0, 0xc000093cc0, 0xc000093c80)
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bb9440, 0x0, 0x0)
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000498540)
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 16 14:01:30 old-k8s-version-882237 kubelet[6536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 16 14:01:30 old-k8s-version-882237 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 14:01:30 old-k8s-version-882237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 14:01:31 old-k8s-version-882237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 16 14:01:31 old-k8s-version-882237 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 14:01:31 old-k8s-version-882237 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 14:01:31 old-k8s-version-882237 kubelet[6545]: I0816 14:01:31.327886    6545 server.go:416] Version: v1.20.0
	Aug 16 14:01:31 old-k8s-version-882237 kubelet[6545]: I0816 14:01:31.328120    6545 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 14:01:31 old-k8s-version-882237 kubelet[6545]: I0816 14:01:31.330070    6545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 14:01:31 old-k8s-version-882237 kubelet[6545]: W0816 14:01:31.331056    6545 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 14:01:31 old-k8s-version-882237 kubelet[6545]: I0816 14:01:31.331123    6545 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (230.052607ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-882237" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (417.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-311070 -n no-preload-311070
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 14:04:54.605681074 +0000 UTC m=+6237.514371724
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-311070 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-311070 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.576µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-311070 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-311070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-311070 logs -n 25: (1.249665295s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 14:03 UTC | 16 Aug 24 14:03 UTC |
	| start   | -p newest-cni-375308 --memory=2200 --alsologtostderr   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:03 UTC | 16 Aug 24 14:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-375308             | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-375308                                   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-375308                  | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-375308 --memory=2200 --alsologtostderr   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 14:04:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 14:04:44.836676   64777 out.go:345] Setting OutFile to fd 1 ...
	I0816 14:04:44.836806   64777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 14:04:44.836817   64777 out.go:358] Setting ErrFile to fd 2...
	I0816 14:04:44.836824   64777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 14:04:44.837182   64777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 14:04:44.837907   64777 out.go:352] Setting JSON to false
	I0816 14:04:44.839126   64777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6430,"bootTime":1723810655,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 14:04:44.839197   64777 start.go:139] virtualization: kvm guest
	I0816 14:04:44.841409   64777 out.go:177] * [newest-cni-375308] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 14:04:44.842606   64777 notify.go:220] Checking for updates...
	I0816 14:04:44.842629   64777 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 14:04:44.843970   64777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 14:04:44.845343   64777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 14:04:44.846463   64777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 14:04:44.847605   64777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 14:04:44.848768   64777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 14:04:44.850261   64777 config.go:182] Loaded profile config "newest-cni-375308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:04:44.850678   64777 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:04:44.850748   64777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:04:44.866420   64777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0816 14:04:44.866809   64777 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:04:44.867317   64777 main.go:141] libmachine: Using API Version  1
	I0816 14:04:44.867336   64777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:04:44.867596   64777 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:04:44.867744   64777 main.go:141] libmachine: (newest-cni-375308) Calling .DriverName
	I0816 14:04:44.867955   64777 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 14:04:44.868265   64777 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:04:44.868295   64777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:04:44.882872   64777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I0816 14:04:44.883292   64777 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:04:44.883802   64777 main.go:141] libmachine: Using API Version  1
	I0816 14:04:44.883827   64777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:04:44.884189   64777 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:04:44.884375   64777 main.go:141] libmachine: (newest-cni-375308) Calling .DriverName
	I0816 14:04:44.920146   64777 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 14:04:44.921600   64777 start.go:297] selected driver: kvm2
	I0816 14:04:44.921622   64777 start.go:901] validating driver "kvm2" against &{Name:newest-cni-375308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-375308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 14:04:44.921752   64777 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 14:04:44.922695   64777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 14:04:44.922773   64777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 14:04:44.937602   64777 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 14:04:44.937926   64777 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 14:04:44.937986   64777 cni.go:84] Creating CNI manager for ""
	I0816 14:04:44.938000   64777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 14:04:44.938036   64777 start.go:340] cluster config:
	{Name:newest-cni-375308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-375308 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 14:04:44.938124   64777 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 14:04:44.940577   64777 out.go:177] * Starting "newest-cni-375308" primary control-plane node in "newest-cni-375308" cluster
	I0816 14:04:44.941889   64777 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 14:04:44.941929   64777 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 14:04:44.941937   64777 cache.go:56] Caching tarball of preloaded images
	I0816 14:04:44.942068   64777 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 14:04:44.942081   64777 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 14:04:44.942221   64777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/newest-cni-375308/config.json ...
	I0816 14:04:44.942436   64777 start.go:360] acquireMachinesLock for newest-cni-375308: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 14:04:44.942489   64777 start.go:364] duration metric: took 30.266µs to acquireMachinesLock for "newest-cni-375308"
	I0816 14:04:44.942507   64777 start.go:96] Skipping create...Using existing machine configuration
	I0816 14:04:44.942522   64777 fix.go:54] fixHost starting: 
	I0816 14:04:44.942826   64777 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:04:44.942852   64777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:04:44.957395   64777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0816 14:04:44.957791   64777 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:04:44.958278   64777 main.go:141] libmachine: Using API Version  1
	I0816 14:04:44.958302   64777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:04:44.958625   64777 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:04:44.958828   64777 main.go:141] libmachine: (newest-cni-375308) Calling .DriverName
	I0816 14:04:44.958983   64777 main.go:141] libmachine: (newest-cni-375308) Calling .GetState
	I0816 14:04:44.960475   64777 fix.go:112] recreateIfNeeded on newest-cni-375308: state=Stopped err=<nil>
	I0816 14:04:44.960496   64777 main.go:141] libmachine: (newest-cni-375308) Calling .DriverName
	W0816 14:04:44.960689   64777 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 14:04:44.962425   64777 out.go:177] * Restarting existing kvm2 VM for "newest-cni-375308" ...
	I0816 14:04:44.963545   64777 main.go:141] libmachine: (newest-cni-375308) Calling .Start
	I0816 14:04:44.963688   64777 main.go:141] libmachine: (newest-cni-375308) Ensuring networks are active...
	I0816 14:04:44.964446   64777 main.go:141] libmachine: (newest-cni-375308) Ensuring network default is active
	I0816 14:04:44.964736   64777 main.go:141] libmachine: (newest-cni-375308) Ensuring network mk-newest-cni-375308 is active
	I0816 14:04:44.965219   64777 main.go:141] libmachine: (newest-cni-375308) Getting domain xml...
	I0816 14:04:44.965998   64777 main.go:141] libmachine: (newest-cni-375308) Creating domain...
	I0816 14:04:46.187569   64777 main.go:141] libmachine: (newest-cni-375308) Waiting to get IP...
	I0816 14:04:46.188629   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:46.189055   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:46.189142   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:46.189043   64812 retry.go:31] will retry after 263.477644ms: waiting for machine to come up
	I0816 14:04:46.454488   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:46.455016   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:46.455075   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:46.454990   64812 retry.go:31] will retry after 350.404378ms: waiting for machine to come up
	I0816 14:04:46.806757   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:46.807300   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:46.807328   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:46.807254   64812 retry.go:31] will retry after 413.018389ms: waiting for machine to come up
	I0816 14:04:47.221986   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:47.222529   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:47.222570   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:47.222481   64812 retry.go:31] will retry after 591.159187ms: waiting for machine to come up
	I0816 14:04:47.815785   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:47.816348   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:47.816378   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:47.816297   64812 retry.go:31] will retry after 701.2594ms: waiting for machine to come up
	I0816 14:04:48.518774   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:48.519214   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:48.519238   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:48.519176   64812 retry.go:31] will retry after 799.403935ms: waiting for machine to come up
	I0816 14:04:49.320216   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:49.320680   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:49.320713   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:49.320621   64812 retry.go:31] will retry after 723.267617ms: waiting for machine to come up
	I0816 14:04:50.045788   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:50.046206   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:50.046316   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:50.046121   64812 retry.go:31] will retry after 1.173474673s: waiting for machine to come up
	I0816 14:04:51.220688   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:51.221123   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:51.221188   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:51.221056   64812 retry.go:31] will retry after 1.849561308s: waiting for machine to come up
	I0816 14:04:53.072007   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:53.072617   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:53.072648   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:53.072567   64812 retry.go:31] will retry after 1.506001822s: waiting for machine to come up
	I0816 14:04:54.580166   64777 main.go:141] libmachine: (newest-cni-375308) DBG | domain newest-cni-375308 has defined MAC address 52:54:00:5c:22:d6 in network mk-newest-cni-375308
	I0816 14:04:54.580722   64777 main.go:141] libmachine: (newest-cni-375308) DBG | unable to find current IP address of domain newest-cni-375308 in network mk-newest-cni-375308
	I0816 14:04:54.580750   64777 main.go:141] libmachine: (newest-cni-375308) DBG | I0816 14:04:54.580671   64812 retry.go:31] will retry after 2.714078531s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.257219286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817095257197596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49bad212-0069-4b8a-8b53-142e1595a7a5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.258008746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27079ea0-0cc8-4323-bee3-0226b4a6e53e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.260802295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27079ea0-0cc8-4323-bee3-0226b4a6e53e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.261164567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27079ea0-0cc8-4323-bee3-0226b4a6e53e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.304619817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f2d0987-2a12-48f9-800b-fe1cff32d45c name=/runtime.v1.RuntimeService/Version
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.304715181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f2d0987-2a12-48f9-800b-fe1cff32d45c name=/runtime.v1.RuntimeService/Version
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.306178546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3691fbe9-db03-4c4c-a471-5f76280d0bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.306577678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817095306548028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3691fbe9-db03-4c4c-a471-5f76280d0bd0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.307243285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=698104b4-b8e3-4461-b248-f05e6313c914 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.307326376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=698104b4-b8e3-4461-b248-f05e6313c914 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.307527537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=698104b4-b8e3-4461-b248-f05e6313c914 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.348320134Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09253dab-fa66-4865-9fdf-41b8da0ccd4b name=/runtime.v1.RuntimeService/Version
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.348408063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09253dab-fa66-4865-9fdf-41b8da0ccd4b name=/runtime.v1.RuntimeService/Version
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.349383430Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8391305-af49-4b22-a741-b9b5c5fd05e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.349721916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817095349700349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8391305-af49-4b22-a741-b9b5c5fd05e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.350276032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d383c7b-0118-41fe-97ab-5d7c6bdf39f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.350344998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d383c7b-0118-41fe-97ab-5d7c6bdf39f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.350547099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d383c7b-0118-41fe-97ab-5d7c6bdf39f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.385581447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffd51870-aead-4c52-8e52-6222b3beba87 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.385670164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffd51870-aead-4c52-8e52-6222b3beba87 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.387417627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84db9d04-546f-4449-b476-5551a314d5a4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.387767779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817095387745631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84db9d04-546f-4449-b476-5551a314d5a4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.388704647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c271f1e8-d6e5-4f62-a11d-113a2a253458 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.388773687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c271f1e8-d6e5-4f62-a11d-113a2a253458 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:04:55 no-preload-311070 crio[719]: time="2024-08-16 14:04:55.388973830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815898514580307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea81a9459ce06058de2dd74f477ececfbe3527bca36613b0f20187f8bbad6be,PodSandboxId:158ed4beb224d2a1ee2d224faaab5e1a05b43e1f7cbee8cbcff9944fb7073edb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815878571525416,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9af952ee-3d22-4bd5-8138-87534a89702c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc,PodSandboxId:b72d7a25c2e011e72c29b783da89adcb9a87a329dda01d9c5c1d4350ee7a118c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815875348466280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8kbs6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e732183e-3b22-4a11-909a-246de5fc1c8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5,PodSandboxId:4d46a3a717255294115a141e0492f386a501475c04a7326fb383c35d7bc4314d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815867712947590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b8d5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ed1c33b-903f-43e8-88
0c-b9a49c658806,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f,PodSandboxId:3125fb14de6f6f2602848b6908ff2459df145dcc7b09d0d47d6185dfb4e27998,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815867713806908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f340d2e3-2889-4200-b477-830494b827c
6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd,PodSandboxId:a7ca36fe4257f236158999f72df3bd5c692914e6868c51b4f3d1cbd104f2c61e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815863000468665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796943b75caef6e46
cae3edcad9a83de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4,PodSandboxId:8bb04f5bf9e67209e7a7ab46b15e8e780c8efd5d82de662c34edacc58e3cebc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815862983423374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82514b8622d04376f3e5fe85f0cb7b
09,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453,PodSandboxId:e7e53bbe2e9c477d736f96b7724eb109b77fdf46ca7f183ff426f80c47127d46,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815862919630139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346913957544dd3f3a427f9db15be919,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1,PodSandboxId:711e122055cf49cbac18c4aaee1af0a2054198bda4111c6fceb09b400aba1e64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815862901636353,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-311070,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 625eb3629609e577befcb415fe7a3e35,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c271f1e8-d6e5-4f62-a11d-113a2a253458 name=/runtime.v1.RuntimeService/ListContainers
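The CRI-O entries above are the runtime answering routine CRI polls (Version, ImageFsInfo, ListContainers with an empty filter) over its unix socket. The sketch below shows a client issuing the same three calls; the k8s.io/cri-api import path, the gRPC wiring and the socket path (taken from the node's cri-socket annotation further down) are assumptions for illustration, not code from this repository.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// unix:///var/run/crio/crio.sock matches the cri-socket annotation
	// reported for this node.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimev1.NewRuntimeServiceClient(conn)
	img := runtimev1.NewImageServiceClient(conn)

	// Version: the log answers with RuntimeName:cri-o, RuntimeVersion:1.29.1.
	ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// ImageFsInfo: mountpoint and used bytes of the image filesystem.
	fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, usage := range fs.ImageFilesystems {
		fmt.Println("image fs:", usage.FsId.Mountpoint, usage.UsedBytes.Value, "bytes")
	}

	// ListContainers with an empty filter mirrors the "No filters were
	// applied, returning full container list" requests in the log.
	cs, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{Filter: &runtimev1.ContainerFilter{}})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}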
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9150d56b0778       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   3125fb14de6f6       storage-provisioner
	0ea81a9459ce0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   158ed4beb224d       busybox
	1c89ddcb90aa2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   b72d7a25c2e01       coredns-6f6b679f8f-8kbs6
	35ef9517598da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   3125fb14de6f6       storage-provisioner
	ca2c017b0b7fc       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Running             kube-proxy                1                   4d46a3a717255       kube-proxy-b8d5b
	d8cda792253cd       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   a7ca36fe4257f       kube-controller-manager-no-preload-311070
	db946a5971167       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   8bb04f5bf9e67       kube-scheduler-no-preload-311070
	43c9169b2abc2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   e7e53bbe2e9c4       etcd-no-preload-311070
	17b3d9ea47cdf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   711e122055cf4       kube-apiserver-no-preload-311070
	
	
	==> coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34903 - 33828 "HINFO IN 8326533554559909018.250990010125686623. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01452461s
	
	
	==> describe nodes <==
	Name:               no-preload-311070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-311070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=no-preload-311070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_36_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:35:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-311070
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 14:04:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 14:00:17 +0000   Fri, 16 Aug 2024 13:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 14:00:17 +0000   Fri, 16 Aug 2024 13:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 14:00:17 +0000   Fri, 16 Aug 2024 13:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 14:00:17 +0000   Fri, 16 Aug 2024 13:44:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.116
	  Hostname:    no-preload-311070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8b176131bdb451e96436ef571244feb
	  System UUID:                b8b17613-1bdb-451e-9643-6ef571244feb
	  Boot ID:                    33340544-bf0f-4dc3-87b7-35d230a40dd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-8kbs6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-311070                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-311070             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-311070    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-b8d5b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-311070             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-mgxhv              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-311070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x2 over 28m)  kubelet          Node no-preload-311070 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x2 over 28m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m (x2 over 28m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-311070 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-311070 event: Registered Node no-preload-311070 in Controller
	  Normal  CIDRAssignmentFailed     28m                cidrAllocator    Node no-preload-311070 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-311070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientMemory
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-311070 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-311070 event: Registered Node no-preload-311070 in Controller
	
	
	==> dmesg <==
	[Aug16 13:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040192] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.764452] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.400769] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.839831] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 13:44] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.054814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053301] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.156121] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.136468] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.277150] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[ +15.547270] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.067283] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981902] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[  +5.577177] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.514602] systemd-fstab-generator[2053]: Ignoring "noauto" option for root device
	[  +4.212777] kauditd_printk_skb: 58 callbacks suppressed
	[ +24.223736] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] <==
	{"level":"warn","ts":"2024-08-16T13:44:32.506477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012782Z","time spent":"1.493690009s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506696Z","caller":"traceutil/trace.go:171","msg":"trace[1913843442] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.4939096s","start":"2024-08-16T13:44:31.012778Z","end":"2024-08-16T13:44:32.506688Z","steps":["trace[1913843442] 'agreement among raft nodes before linearized reading'  (duration: 1.483210808s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506751Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012770Z","time spent":"1.493971533s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.506844Z","caller":"traceutil/trace.go:171","msg":"trace[130826627] range","detail":"{range_begin:/registry/minions/no-preload-311070; range_end:; response_count:1; response_revision:552; }","duration":"1.494112613s","start":"2024-08-16T13:44:31.012725Z","end":"2024-08-16T13:44:32.506837Z","steps":["trace[130826627] 'agreement among raft nodes before linearized reading'  (duration: 1.482731812s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.506888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012715Z","time spent":"1.494166163s","remote":"127.0.0.1:34424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4662,"request content":"key:\"/registry/minions/no-preload-311070\" "}
	{"level":"info","ts":"2024-08-16T13:44:32.507050Z","caller":"traceutil/trace.go:171","msg":"trace[1429387092] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:552; }","duration":"1.494291576s","start":"2024-08-16T13:44:31.012752Z","end":"2024-08-16T13:44:32.507043Z","steps":["trace[1429387092] 'agreement among raft nodes before linearized reading'  (duration: 1.482648043s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T13:44:32.507151Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T13:44:31.012744Z","time spent":"1.49439897s","remote":"127.0.0.1:34472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":237,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2024-08-16T13:54:25.111980Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":893}
	{"level":"info","ts":"2024-08-16T13:54:25.126114Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":893,"took":"13.649946ms","hash":4035405371,"current-db-size-bytes":2809856,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2809856,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-08-16T13:54:25.126175Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035405371,"revision":893,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T13:59:25.122460Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1136}
	{"level":"info","ts":"2024-08-16T13:59:25.126750Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1136,"took":"3.711417ms","hash":1872673072,"current-db-size-bytes":2809856,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-16T13:59:25.126837Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1872673072,"revision":1136,"compact-revision":893}
	{"level":"warn","ts":"2024-08-16T14:04:19.882656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.792877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-08-16T14:04:19.882815Z","caller":"traceutil/trace.go:171","msg":"trace[221857932] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1617; }","duration":"142.996385ms","start":"2024-08-16T14:04:19.739793Z","end":"2024-08-16T14:04:19.882789Z","steps":["trace[221857932] 'range keys from in-memory index tree'  (duration: 142.683064ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:04:20.058128Z","caller":"traceutil/trace.go:171","msg":"trace[203014973] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"171.321037ms","start":"2024-08-16T14:04:19.886780Z","end":"2024-08-16T14:04:20.058101Z","steps":["trace[203014973] 'process raft request'  (duration: 171.173832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:04:20.678424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.887508ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:04:20.678489Z","caller":"traceutil/trace.go:171","msg":"trace[1279069616] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1619; }","duration":"123.976655ms","start":"2024-08-16T14:04:20.554499Z","end":"2024-08-16T14:04:20.678475Z","steps":["trace[1279069616] 'range keys from in-memory index tree'  (duration: 123.873398ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:04:20.678595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.746275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:04:20.678610Z","caller":"traceutil/trace.go:171","msg":"trace[30733907] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1619; }","duration":"127.766077ms","start":"2024-08-16T14:04:20.550840Z","end":"2024-08-16T14:04:20.678606Z","steps":["trace[30733907] 'range keys from in-memory index tree'  (duration: 127.674705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:04:20.678691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.010447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:04:20.678849Z","caller":"traceutil/trace.go:171","msg":"trace[1910526132] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1619; }","duration":"191.170583ms","start":"2024-08-16T14:04:20.487664Z","end":"2024-08-16T14:04:20.678834Z","steps":["trace[1910526132] 'range keys from in-memory index tree'  (duration: 190.912475ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:04:25.135707Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1380}
	{"level":"info","ts":"2024-08-16T14:04:25.140385Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1380,"took":"4.32302ms","hash":4041387369,"current-db-size-bytes":2809856,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T14:04:25.140452Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4041387369,"revision":1380,"compact-revision":1136}
	
	
	==> kernel <==
	 14:04:55 up 21 min,  0 users,  load average: 0.00, 0.03, 0.06
	Linux no-preload-311070 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] <==
	I0816 14:00:27.694737       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:00:27.694796       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 14:02:27.695463       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:02:27.695579       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:02:27.695472       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:02:27.695626       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 14:02:27.696875       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:02:27.696872       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 14:04:26.694997       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:04:26.695163       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:04:27.696694       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:04:27.696812       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:04:27.696881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:04:27.696969       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 14:04:27.697958       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:04:27.699091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] <==
	E0816 13:59:30.509942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 13:59:30.992673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:00:00.515886       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:00:01.001593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:00:17.660264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-311070"
	E0816 14:00:30.522918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:00:31.016193       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:00:54.318350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="433.014µs"
	E0816 14:01:00.528865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:01:01.023433       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:01:05.313998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="215.408µs"
	E0816 14:01:30.536899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:01:31.031144       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:02:00.543014       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:02:01.038554       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:02:30.549356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:02:31.048535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:03:00.556278       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:03:01.055363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:03:30.563520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:03:31.064728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:04:00.569712       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:04:01.071728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:04:30.576505       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:04:31.081651       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:44:28.244999       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:44:28.266220       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.116"]
	E0816 13:44:28.266360       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:44:28.326563       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:44:28.326685       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:44:28.326742       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:44:28.330369       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:44:28.330862       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:44:28.330927       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:28.336143       1 config.go:326] "Starting node config controller"
	I0816 13:44:28.336177       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:44:28.338982       1 config.go:197] "Starting service config controller"
	I0816 13:44:28.339017       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:44:28.339032       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:44:28.339038       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:44:28.339466       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:44:28.437231       1 shared_informer.go:320] Caches are synced for node config
	I0816 13:44:28.439445       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] <==
	I0816 13:44:24.377492       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:44:26.647517       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:44:26.647606       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:44:26.647615       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:44:26.647622       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:44:26.738176       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:44:26.738234       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:26.748597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:44:26.748648       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:44:26.751505       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:44:26.751620       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:44:26.849981       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 14:03:48 no-preload-311070 kubelet[1427]: E0816 14:03:48.295356    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 14:03:52 no-preload-311070 kubelet[1427]: E0816 14:03:52.581880    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817032579305528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:03:52 no-preload-311070 kubelet[1427]: E0816 14:03:52.581908    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817032579305528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:01 no-preload-311070 kubelet[1427]: E0816 14:04:01.294917    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 14:04:02 no-preload-311070 kubelet[1427]: E0816 14:04:02.583522    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817042583024017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:02 no-preload-311070 kubelet[1427]: E0816 14:04:02.583552    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817042583024017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:12 no-preload-311070 kubelet[1427]: E0816 14:04:12.585311    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817052584881801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:12 no-preload-311070 kubelet[1427]: E0816 14:04:12.585373    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817052584881801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:15 no-preload-311070 kubelet[1427]: E0816 14:04:15.294887    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]: E0816 14:04:22.326007    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]: E0816 14:04:22.588332    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817062587318546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:22 no-preload-311070 kubelet[1427]: E0816 14:04:22.588365    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817062587318546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:27 no-preload-311070 kubelet[1427]: E0816 14:04:27.294998    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 14:04:32 no-preload-311070 kubelet[1427]: E0816 14:04:32.591218    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817072590329718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:32 no-preload-311070 kubelet[1427]: E0816 14:04:32.591766    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817072590329718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:41 no-preload-311070 kubelet[1427]: E0816 14:04:41.295177    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	Aug 16 14:04:42 no-preload-311070 kubelet[1427]: E0816 14:04:42.595147    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817082594770876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:42 no-preload-311070 kubelet[1427]: E0816 14:04:42.595201    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817082594770876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:52 no-preload-311070 kubelet[1427]: E0816 14:04:52.597435    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817092595943263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:52 no-preload-311070 kubelet[1427]: E0816 14:04:52.597502    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817092595943263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:53 no-preload-311070 kubelet[1427]: E0816 14:04:53.295195    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mgxhv" podUID="e9654a8e-4db2-494d-93a7-a134b0e2bb50"
	
	
	==> storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] <==
	I0816 13:44:27.947014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 13:44:57.953583       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] <==
	I0816 13:44:58.620148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:44:58.632919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:44:58.633214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:45:16.039534       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:45:16.039878       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-311070_1ee62cc1-65e0-4a9e-97c6-ada22117f8b3!
	I0816 13:45:16.041887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2039a4dc-6f93-4a66-bad3-9b5760e7138c", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-311070_1ee62cc1-65e0-4a9e-97c6-ada22117f8b3 became leader
	I0816 13:45:16.140913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-311070_1ee62cc1-65e0-4a9e-97c6-ada22117f8b3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-311070 -n no-preload-311070
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-311070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mgxhv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-311070 describe pod metrics-server-6867b74b74-mgxhv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-311070 describe pod metrics-server-6867b74b74-mgxhv: exit status 1 (82.911281ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mgxhv" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-311070 describe pod metrics-server-6867b74b74-mgxhv: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (417.96s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 14:07:26.248038726 +0000 UTC m=+6389.156729352
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-893736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (56.223276ms)

** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-893736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-893736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-893736 logs -n 25: (1.429025157s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo docker                        | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo cat                           | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo                               | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo find                          | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-251866 sudo crio                          | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-251866                                    | kindnet-251866            | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC | 16 Aug 24 14:07 UTC |
	| start   | -p enable-default-cni-251866                         | enable-default-cni-251866 | jenkins | v1.33.1 | 16 Aug 24 14:07 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 14:07:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 14:07:17.073139   69794 out.go:345] Setting OutFile to fd 1 ...
	I0816 14:07:17.073245   69794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 14:07:17.073255   69794 out.go:358] Setting ErrFile to fd 2...
	I0816 14:07:17.073259   69794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 14:07:17.073442   69794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 14:07:17.074033   69794 out.go:352] Setting JSON to false
	I0816 14:07:17.075074   69794 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6582,"bootTime":1723810655,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 14:07:17.075131   69794 start.go:139] virtualization: kvm guest
	I0816 14:07:17.077236   69794 out.go:177] * [enable-default-cni-251866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 14:07:17.078642   69794 notify.go:220] Checking for updates...
	I0816 14:07:17.078673   69794 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 14:07:17.079986   69794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 14:07:17.081336   69794 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 14:07:17.082506   69794 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 14:07:17.083769   69794 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 14:07:17.084951   69794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 14:07:17.086612   69794 config.go:182] Loaded profile config "calico-251866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:07:17.086705   69794 config.go:182] Loaded profile config "custom-flannel-251866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:07:17.086801   69794 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:07:17.086889   69794 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 14:07:17.124022   69794 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 14:07:17.125355   69794 start.go:297] selected driver: kvm2
	I0816 14:07:17.125374   69794 start.go:901] validating driver "kvm2" against <nil>
	I0816 14:07:17.125385   69794 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 14:07:17.126016   69794 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 14:07:17.126075   69794 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 14:07:17.141495   69794 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 14:07:17.141554   69794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0816 14:07:17.141755   69794 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0816 14:07:17.141779   69794 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 14:07:17.141816   69794 cni.go:84] Creating CNI manager for "bridge"
	I0816 14:07:17.141830   69794 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 14:07:17.141883   69794 start.go:340] cluster config:
	{Name:enable-default-cni-251866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-251866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 14:07:17.142003   69794 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 14:07:17.144634   69794 out.go:177] * Starting "enable-default-cni-251866" primary control-plane node in "enable-default-cni-251866" cluster
	I0816 14:07:17.145867   69794 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 14:07:17.145922   69794 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 14:07:17.145934   69794 cache.go:56] Caching tarball of preloaded images
	I0816 14:07:17.146018   69794 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 14:07:17.146033   69794 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 14:07:17.146116   69794 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/enable-default-cni-251866/config.json ...
	I0816 14:07:17.146133   69794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/enable-default-cni-251866/config.json: {Name:mk8324cb60b455d207818425579e4906670b0f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 14:07:17.146287   69794 start.go:360] acquireMachinesLock for enable-default-cni-251866: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 14:07:17.146320   69794 start.go:364] duration metric: took 16.931µs to acquireMachinesLock for "enable-default-cni-251866"
	I0816 14:07:17.146340   69794 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-251866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-251866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 14:07:17.146412   69794 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 14:07:18.482174   68056 kubeadm.go:310] [api-check] The API server is healthy after 5.001934706s
	I0816 14:07:18.493738   68056 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 14:07:18.510087   68056 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 14:07:18.538759   68056 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 14:07:18.539171   68056 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-251866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 14:07:18.563681   68056 kubeadm.go:310] [bootstrap-token] Using token: x380gj.q5wnp3q0oo3stqz1
	I0816 14:07:18.565118   68056 out.go:235]   - Configuring RBAC rules ...
	I0816 14:07:18.565248   68056 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 14:07:18.570620   68056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 14:07:18.583001   68056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 14:07:18.587317   68056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 14:07:18.593091   68056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 14:07:18.597374   68056 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 14:07:18.888138   68056 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 14:07:19.317644   68056 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 14:07:19.888477   68056 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 14:07:19.889661   68056 kubeadm.go:310] 
	I0816 14:07:19.889753   68056 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 14:07:19.889764   68056 kubeadm.go:310] 
	I0816 14:07:19.889861   68056 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 14:07:19.889872   68056 kubeadm.go:310] 
	I0816 14:07:19.889905   68056 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 14:07:19.889987   68056 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 14:07:19.890059   68056 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 14:07:19.890085   68056 kubeadm.go:310] 
	I0816 14:07:19.890177   68056 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 14:07:19.890188   68056 kubeadm.go:310] 
	I0816 14:07:19.890270   68056 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 14:07:19.890284   68056 kubeadm.go:310] 
	I0816 14:07:19.890352   68056 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 14:07:19.890478   68056 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 14:07:19.890580   68056 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 14:07:19.890587   68056 kubeadm.go:310] 
	I0816 14:07:19.890695   68056 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 14:07:19.890810   68056 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 14:07:19.890819   68056 kubeadm.go:310] 
	I0816 14:07:19.890926   68056 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x380gj.q5wnp3q0oo3stqz1 \
	I0816 14:07:19.891054   68056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 14:07:19.891086   68056 kubeadm.go:310] 	--control-plane 
	I0816 14:07:19.891094   68056 kubeadm.go:310] 
	I0816 14:07:19.891210   68056 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 14:07:19.891219   68056 kubeadm.go:310] 
	I0816 14:07:19.891336   68056 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x380gj.q5wnp3q0oo3stqz1 \
	I0816 14:07:19.891479   68056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 14:07:19.891898   68056 kubeadm.go:310] W0816 14:07:09.322733     865 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 14:07:19.892300   68056 kubeadm.go:310] W0816 14:07:09.323498     865 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 14:07:19.892474   68056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 14:07:19.892504   68056 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0816 14:07:19.893937   68056 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0816 14:07:18.927199   66168 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-jwjzz" in "kube-system" namespace has status "Ready":"False"
	I0816 14:07:20.935902   66168 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-jwjzz" in "kube-system" namespace has status "Ready":"False"
	I0816 14:07:17.148000   69794 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 14:07:17.148106   69794 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:07:17.148141   69794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:07:17.162582   69794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0816 14:07:17.162987   69794 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:07:17.163442   69794 main.go:141] libmachine: Using API Version  1
	I0816 14:07:17.163458   69794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:07:17.163813   69794 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:07:17.164010   69794 main.go:141] libmachine: (enable-default-cni-251866) Calling .GetMachineName
	I0816 14:07:17.164139   69794 main.go:141] libmachine: (enable-default-cni-251866) Calling .DriverName
	I0816 14:07:17.164290   69794 start.go:159] libmachine.API.Create for "enable-default-cni-251866" (driver="kvm2")
	I0816 14:07:17.164318   69794 client.go:168] LocalClient.Create starting
	I0816 14:07:17.164347   69794 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem
	I0816 14:07:17.164408   69794 main.go:141] libmachine: Decoding PEM data...
	I0816 14:07:17.164427   69794 main.go:141] libmachine: Parsing certificate...
	I0816 14:07:17.164479   69794 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem
	I0816 14:07:17.164499   69794 main.go:141] libmachine: Decoding PEM data...
	I0816 14:07:17.164512   69794 main.go:141] libmachine: Parsing certificate...
	I0816 14:07:17.164530   69794 main.go:141] libmachine: Running pre-create checks...
	I0816 14:07:17.164543   69794 main.go:141] libmachine: (enable-default-cni-251866) Calling .PreCreateCheck
	I0816 14:07:17.164894   69794 main.go:141] libmachine: (enable-default-cni-251866) Calling .GetConfigRaw
	I0816 14:07:17.165288   69794 main.go:141] libmachine: Creating machine...
	I0816 14:07:17.165303   69794 main.go:141] libmachine: (enable-default-cni-251866) Calling .Create
	I0816 14:07:17.165456   69794 main.go:141] libmachine: (enable-default-cni-251866) Creating KVM machine...
	I0816 14:07:17.166617   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | found existing default KVM network
	I0816 14:07:17.168011   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:17.167869   69817 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0816 14:07:17.168037   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | created network xml: 
	I0816 14:07:17.168051   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | <network>
	I0816 14:07:17.168064   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |   <name>mk-enable-default-cni-251866</name>
	I0816 14:07:17.168075   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |   <dns enable='no'/>
	I0816 14:07:17.168085   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |   
	I0816 14:07:17.168098   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 14:07:17.168108   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |     <dhcp>
	I0816 14:07:17.168121   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 14:07:17.168135   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |     </dhcp>
	I0816 14:07:17.168147   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |   </ip>
	I0816 14:07:17.168154   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG |   
	I0816 14:07:17.168172   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | </network>
	I0816 14:07:17.168182   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | 
	I0816 14:07:17.173195   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | trying to create private KVM network mk-enable-default-cni-251866 192.168.39.0/24...
	I0816 14:07:17.244051   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting up store path in /home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866 ...
	I0816 14:07:17.244088   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | private KVM network mk-enable-default-cni-251866 192.168.39.0/24 created
	I0816 14:07:17.244104   69794 main.go:141] libmachine: (enable-default-cni-251866) Building disk image from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 14:07:17.244127   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:17.243987   69817 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 14:07:17.244161   69794 main.go:141] libmachine: (enable-default-cni-251866) Downloading /home/jenkins/minikube-integration/19423-3966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 14:07:17.491570   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:17.491477   69817 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866/id_rsa...
	I0816 14:07:17.656152   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:17.656018   69817 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866/enable-default-cni-251866.rawdisk...
	I0816 14:07:17.656212   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Writing magic tar header
	I0816 14:07:17.656235   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Writing SSH key tar header
	I0816 14:07:17.656250   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:17.656128   69817 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866 ...
	I0816 14:07:17.656270   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866
	I0816 14:07:17.656314   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866 (perms=drwx------)
	I0816 14:07:17.656329   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube/machines (perms=drwxr-xr-x)
	I0816 14:07:17.656350   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube/machines
	I0816 14:07:17.656361   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966/.minikube (perms=drwxr-xr-x)
	I0816 14:07:17.656374   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting executable bit set on /home/jenkins/minikube-integration/19423-3966 (perms=drwxrwxr-x)
	I0816 14:07:17.656383   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 14:07:17.656394   69794 main.go:141] libmachine: (enable-default-cni-251866) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 14:07:17.656404   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 14:07:17.656416   69794 main.go:141] libmachine: (enable-default-cni-251866) Creating domain...
	I0816 14:07:17.656431   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-3966
	I0816 14:07:17.656440   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 14:07:17.656450   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home/jenkins
	I0816 14:07:17.656458   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Checking permissions on dir: /home
	I0816 14:07:17.656468   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | Skipping /home - not owner
	I0816 14:07:17.657896   69794 main.go:141] libmachine: (enable-default-cni-251866) define libvirt domain using xml: 
	I0816 14:07:17.657916   69794 main.go:141] libmachine: (enable-default-cni-251866) <domain type='kvm'>
	I0816 14:07:17.657927   69794 main.go:141] libmachine: (enable-default-cni-251866)   <name>enable-default-cni-251866</name>
	I0816 14:07:17.657935   69794 main.go:141] libmachine: (enable-default-cni-251866)   <memory unit='MiB'>3072</memory>
	I0816 14:07:17.657952   69794 main.go:141] libmachine: (enable-default-cni-251866)   <vcpu>2</vcpu>
	I0816 14:07:17.657974   69794 main.go:141] libmachine: (enable-default-cni-251866)   <features>
	I0816 14:07:17.657986   69794 main.go:141] libmachine: (enable-default-cni-251866)     <acpi/>
	I0816 14:07:17.657999   69794 main.go:141] libmachine: (enable-default-cni-251866)     <apic/>
	I0816 14:07:17.658011   69794 main.go:141] libmachine: (enable-default-cni-251866)     <pae/>
	I0816 14:07:17.658025   69794 main.go:141] libmachine: (enable-default-cni-251866)     
	I0816 14:07:17.658035   69794 main.go:141] libmachine: (enable-default-cni-251866)   </features>
	I0816 14:07:17.658045   69794 main.go:141] libmachine: (enable-default-cni-251866)   <cpu mode='host-passthrough'>
	I0816 14:07:17.658054   69794 main.go:141] libmachine: (enable-default-cni-251866)   
	I0816 14:07:17.658065   69794 main.go:141] libmachine: (enable-default-cni-251866)   </cpu>
	I0816 14:07:17.658183   69794 main.go:141] libmachine: (enable-default-cni-251866)   <os>
	I0816 14:07:17.658233   69794 main.go:141] libmachine: (enable-default-cni-251866)     <type>hvm</type>
	I0816 14:07:17.658258   69794 main.go:141] libmachine: (enable-default-cni-251866)     <boot dev='cdrom'/>
	I0816 14:07:17.658286   69794 main.go:141] libmachine: (enable-default-cni-251866)     <boot dev='hd'/>
	I0816 14:07:17.658300   69794 main.go:141] libmachine: (enable-default-cni-251866)     <bootmenu enable='no'/>
	I0816 14:07:17.658311   69794 main.go:141] libmachine: (enable-default-cni-251866)   </os>
	I0816 14:07:17.658318   69794 main.go:141] libmachine: (enable-default-cni-251866)   <devices>
	I0816 14:07:17.658331   69794 main.go:141] libmachine: (enable-default-cni-251866)     <disk type='file' device='cdrom'>
	I0816 14:07:17.658345   69794 main.go:141] libmachine: (enable-default-cni-251866)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866/boot2docker.iso'/>
	I0816 14:07:17.658357   69794 main.go:141] libmachine: (enable-default-cni-251866)       <target dev='hdc' bus='scsi'/>
	I0816 14:07:17.658366   69794 main.go:141] libmachine: (enable-default-cni-251866)       <readonly/>
	I0816 14:07:17.658371   69794 main.go:141] libmachine: (enable-default-cni-251866)     </disk>
	I0816 14:07:17.658378   69794 main.go:141] libmachine: (enable-default-cni-251866)     <disk type='file' device='disk'>
	I0816 14:07:17.658384   69794 main.go:141] libmachine: (enable-default-cni-251866)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 14:07:17.658393   69794 main.go:141] libmachine: (enable-default-cni-251866)       <source file='/home/jenkins/minikube-integration/19423-3966/.minikube/machines/enable-default-cni-251866/enable-default-cni-251866.rawdisk'/>
	I0816 14:07:17.658406   69794 main.go:141] libmachine: (enable-default-cni-251866)       <target dev='hda' bus='virtio'/>
	I0816 14:07:17.658418   69794 main.go:141] libmachine: (enable-default-cni-251866)     </disk>
	I0816 14:07:17.658430   69794 main.go:141] libmachine: (enable-default-cni-251866)     <interface type='network'>
	I0816 14:07:17.658443   69794 main.go:141] libmachine: (enable-default-cni-251866)       <source network='mk-enable-default-cni-251866'/>
	I0816 14:07:17.658453   69794 main.go:141] libmachine: (enable-default-cni-251866)       <model type='virtio'/>
	I0816 14:07:17.658466   69794 main.go:141] libmachine: (enable-default-cni-251866)     </interface>
	I0816 14:07:17.658473   69794 main.go:141] libmachine: (enable-default-cni-251866)     <interface type='network'>
	I0816 14:07:17.658478   69794 main.go:141] libmachine: (enable-default-cni-251866)       <source network='default'/>
	I0816 14:07:17.658483   69794 main.go:141] libmachine: (enable-default-cni-251866)       <model type='virtio'/>
	I0816 14:07:17.658491   69794 main.go:141] libmachine: (enable-default-cni-251866)     </interface>
	I0816 14:07:17.658502   69794 main.go:141] libmachine: (enable-default-cni-251866)     <serial type='pty'>
	I0816 14:07:17.658522   69794 main.go:141] libmachine: (enable-default-cni-251866)       <target port='0'/>
	I0816 14:07:17.658532   69794 main.go:141] libmachine: (enable-default-cni-251866)     </serial>
	I0816 14:07:17.658541   69794 main.go:141] libmachine: (enable-default-cni-251866)     <console type='pty'>
	I0816 14:07:17.658551   69794 main.go:141] libmachine: (enable-default-cni-251866)       <target type='serial' port='0'/>
	I0816 14:07:17.658560   69794 main.go:141] libmachine: (enable-default-cni-251866)     </console>
	I0816 14:07:17.658568   69794 main.go:141] libmachine: (enable-default-cni-251866)     <rng model='virtio'>
	I0816 14:07:17.658577   69794 main.go:141] libmachine: (enable-default-cni-251866)       <backend model='random'>/dev/random</backend>
	I0816 14:07:17.658587   69794 main.go:141] libmachine: (enable-default-cni-251866)     </rng>
	I0816 14:07:17.658596   69794 main.go:141] libmachine: (enable-default-cni-251866)     
	I0816 14:07:17.658605   69794 main.go:141] libmachine: (enable-default-cni-251866)     
	I0816 14:07:17.658614   69794 main.go:141] libmachine: (enable-default-cni-251866)   </devices>
	I0816 14:07:17.658624   69794 main.go:141] libmachine: (enable-default-cni-251866) </domain>
	I0816 14:07:17.658635   69794 main.go:141] libmachine: (enable-default-cni-251866) 
	I0816 14:07:17.662373   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:8f:92:a4 in network default
	I0816 14:07:17.663027   69794 main.go:141] libmachine: (enable-default-cni-251866) Ensuring networks are active...
	I0816 14:07:17.663058   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:17.663834   69794 main.go:141] libmachine: (enable-default-cni-251866) Ensuring network default is active
	I0816 14:07:17.664224   69794 main.go:141] libmachine: (enable-default-cni-251866) Ensuring network mk-enable-default-cni-251866 is active
	I0816 14:07:17.664788   69794 main.go:141] libmachine: (enable-default-cni-251866) Getting domain xml...
	I0816 14:07:17.665571   69794 main.go:141] libmachine: (enable-default-cni-251866) Creating domain...
	I0816 14:07:19.023669   69794 main.go:141] libmachine: (enable-default-cni-251866) Waiting to get IP...
	I0816 14:07:19.024601   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:19.025154   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:19.025185   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:19.025099   69817 retry.go:31] will retry after 222.143928ms: waiting for machine to come up
	I0816 14:07:19.249586   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:19.250142   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:19.250172   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:19.250070   69817 retry.go:31] will retry after 357.637342ms: waiting for machine to come up
	I0816 14:07:19.609719   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:19.610149   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:19.610184   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:19.610138   69817 retry.go:31] will retry after 399.663896ms: waiting for machine to come up
	I0816 14:07:20.011779   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:20.012356   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:20.012379   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:20.012317   69817 retry.go:31] will retry after 498.285557ms: waiting for machine to come up
	I0816 14:07:20.512125   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:20.512679   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:20.512699   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:20.512631   69817 retry.go:31] will retry after 494.041481ms: waiting for machine to come up
	I0816 14:07:21.008254   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:21.008743   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:21.008765   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:21.008702   69817 retry.go:31] will retry after 819.035862ms: waiting for machine to come up
	I0816 14:07:21.829793   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | domain enable-default-cni-251866 has defined MAC address 52:54:00:17:7a:e2 in network mk-enable-default-cni-251866
	I0816 14:07:21.830453   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | unable to find current IP address of domain enable-default-cni-251866 in network mk-enable-default-cni-251866
	I0816 14:07:21.830492   69794 main.go:141] libmachine: (enable-default-cni-251866) DBG | I0816 14:07:21.830378   69817 retry.go:31] will retry after 883.197444ms: waiting for machine to come up
	I0816 14:07:19.895027   68056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 14:07:19.895081   68056 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0816 14:07:19.900808   68056 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0816 14:07:19.900832   68056 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0816 14:07:19.935396   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 14:07:20.382470   68056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 14:07:20.382529   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:20.382553   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-251866 minikube.k8s.io/updated_at=2024_08_16T14_07_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=custom-flannel-251866 minikube.k8s.io/primary=true
	I0816 14:07:20.557425   68056 ops.go:34] apiserver oom_adj: -16
	I0816 14:07:20.583189   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:21.084018   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:21.583288   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:22.083559   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:22.583550   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:23.084069   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:23.583320   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:24.083254   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:24.583211   68056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 14:07:24.709209   68056 kubeadm.go:1113] duration metric: took 4.326731414s to wait for elevateKubeSystemPrivileges
	I0816 14:07:24.709243   68056 kubeadm.go:394] duration metric: took 15.587375738s to StartCluster
	I0816 14:07:24.709261   68056 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 14:07:24.709334   68056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 14:07:24.710496   68056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 14:07:24.710724   68056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 14:07:24.710727   68056 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 14:07:24.710836   68056 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 14:07:24.710910   68056 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-251866"
	I0816 14:07:24.710918   68056 config.go:182] Loaded profile config "custom-flannel-251866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:07:24.710928   68056 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-251866"
	I0816 14:07:24.710947   68056 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-251866"
	I0816 14:07:24.710974   68056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-251866"
	I0816 14:07:24.710990   68056 host.go:66] Checking if "custom-flannel-251866" exists ...
	I0816 14:07:24.711421   68056 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:07:24.711456   68056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:07:24.711531   68056 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:07:24.711555   68056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:07:24.712419   68056 out.go:177] * Verifying Kubernetes components...
	I0816 14:07:24.713915   68056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 14:07:24.731750   68056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
	I0816 14:07:24.731959   68056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0816 14:07:24.732151   68056 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:07:24.732250   68056 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:07:24.732695   68056 main.go:141] libmachine: Using API Version  1
	I0816 14:07:24.732718   68056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:07:24.733024   68056 main.go:141] libmachine: Using API Version  1
	I0816 14:07:24.733043   68056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:07:24.733082   68056 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:07:24.733363   68056 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:07:24.733622   68056 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:07:24.733645   68056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:07:24.733851   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetState
	I0816 14:07:24.737357   68056 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-251866"
	I0816 14:07:24.737403   68056 host.go:66] Checking if "custom-flannel-251866" exists ...
	I0816 14:07:24.737784   68056 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:07:24.737824   68056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:07:24.752396   68056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
	I0816 14:07:24.752804   68056 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:07:24.753360   68056 main.go:141] libmachine: Using API Version  1
	I0816 14:07:24.753391   68056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:07:24.753695   68056 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:07:24.754294   68056 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 14:07:24.754328   68056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 14:07:24.754778   68056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0816 14:07:24.755125   68056 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:07:24.755611   68056 main.go:141] libmachine: Using API Version  1
	I0816 14:07:24.755637   68056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:07:24.756011   68056 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:07:24.756209   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetState
	I0816 14:07:24.758071   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .DriverName
	I0816 14:07:24.760007   68056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 14:07:24.761428   68056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 14:07:24.761447   68056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 14:07:24.761465   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHHostname
	I0816 14:07:24.764880   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | domain custom-flannel-251866 has defined MAC address 52:54:00:8f:fb:b4 in network mk-custom-flannel-251866
	I0816 14:07:24.765667   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:fb:b4", ip: ""} in network mk-custom-flannel-251866: {Iface:virbr3 ExpiryTime:2024-08-16 15:06:49 +0000 UTC Type:0 Mac:52:54:00:8f:fb:b4 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:custom-flannel-251866 Clientid:01:52:54:00:8f:fb:b4}
	I0816 14:07:24.765686   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHPort
	I0816 14:07:24.765700   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | domain custom-flannel-251866 has defined IP address 192.168.61.48 and MAC address 52:54:00:8f:fb:b4 in network mk-custom-flannel-251866
	I0816 14:07:24.765880   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHKeyPath
	I0816 14:07:24.766056   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHUsername
	I0816 14:07:24.766198   68056 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/custom-flannel-251866/id_rsa Username:docker}
	I0816 14:07:24.772763   68056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0816 14:07:24.773145   68056 main.go:141] libmachine: () Calling .GetVersion
	I0816 14:07:24.773654   68056 main.go:141] libmachine: Using API Version  1
	I0816 14:07:24.773682   68056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 14:07:24.774029   68056 main.go:141] libmachine: () Calling .GetMachineName
	I0816 14:07:24.774232   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetState
	I0816 14:07:24.776126   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .DriverName
	I0816 14:07:24.776342   68056 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 14:07:24.776356   68056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 14:07:24.776385   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHHostname
	I0816 14:07:24.779314   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | domain custom-flannel-251866 has defined MAC address 52:54:00:8f:fb:b4 in network mk-custom-flannel-251866
	I0816 14:07:24.779795   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:fb:b4", ip: ""} in network mk-custom-flannel-251866: {Iface:virbr3 ExpiryTime:2024-08-16 15:06:49 +0000 UTC Type:0 Mac:52:54:00:8f:fb:b4 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:custom-flannel-251866 Clientid:01:52:54:00:8f:fb:b4}
	I0816 14:07:24.779817   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | domain custom-flannel-251866 has defined IP address 192.168.61.48 and MAC address 52:54:00:8f:fb:b4 in network mk-custom-flannel-251866
	I0816 14:07:24.779959   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHPort
	I0816 14:07:24.780101   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHKeyPath
	I0816 14:07:24.780209   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .GetSSHUsername
	I0816 14:07:24.780313   68056 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/custom-flannel-251866/id_rsa Username:docker}
	I0816 14:07:24.885067   68056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 14:07:24.906660   68056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 14:07:25.002891   68056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 14:07:25.075771   68056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 14:07:25.214434   68056 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0816 14:07:25.215545   68056 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-251866" to be "Ready" ...
	I0816 14:07:25.687609   68056 main.go:141] libmachine: Making call to close driver server
	I0816 14:07:25.687655   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .Close
	I0816 14:07:25.687694   68056 main.go:141] libmachine: Making call to close driver server
	I0816 14:07:25.687721   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .Close
	I0816 14:07:25.687992   68056 main.go:141] libmachine: Successfully made call to close driver server
	I0816 14:07:25.688013   68056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 14:07:25.688016   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | Closing plugin on server side
	I0816 14:07:25.688022   68056 main.go:141] libmachine: Making call to close driver server
	I0816 14:07:25.688044   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .Close
	I0816 14:07:25.688118   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | Closing plugin on server side
	I0816 14:07:25.688143   68056 main.go:141] libmachine: Successfully made call to close driver server
	I0816 14:07:25.688173   68056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 14:07:25.688186   68056 main.go:141] libmachine: Making call to close driver server
	I0816 14:07:25.688198   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .Close
	I0816 14:07:25.688358   68056 main.go:141] libmachine: Successfully made call to close driver server
	I0816 14:07:25.688373   68056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 14:07:25.688389   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | Closing plugin on server side
	I0816 14:07:25.688596   68056 main.go:141] libmachine: (custom-flannel-251866) DBG | Closing plugin on server side
	I0816 14:07:25.688627   68056 main.go:141] libmachine: Successfully made call to close driver server
	I0816 14:07:25.688662   68056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 14:07:25.698236   68056 main.go:141] libmachine: Making call to close driver server
	I0816 14:07:25.698255   68056 main.go:141] libmachine: (custom-flannel-251866) Calling .Close
	I0816 14:07:25.698531   68056 main.go:141] libmachine: Successfully made call to close driver server
	I0816 14:07:25.698549   68056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 14:07:25.700372   68056 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
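	For reference, the CoreDNS ConfigMap rewrite logged at 14:07:24.885067 above splices a hosts block (and a log directive) into the default Corefile. The affected fragment should end up looking roughly like this (a sketch assembled from the sed expression in that command; the surrounding directives are assumed to be the stock CoreDNS defaults and are elided):
	
	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.61.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}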
	
	
	==> CRI-O <==
	Aug 16 14:07:26 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:26.988359997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fea2678f-3739-481c-af96-eee1dd408e5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:26 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:26.990420385Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4b3cde51-e768-438a-87f7-d668d4a51805 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 14:07:26 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:26.990778202Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-xdwhx,Uid:66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815903853596101,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:55.988731812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1723815903852891551,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:55.988730968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10a3948041475f0c451ba2030a926ac49a93e132949031de4462f4fff9d12873,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-j9tqh,Uid:ef077e6d-f368-4872-bb87-9e031d3ea764,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815902052308194,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-j9tqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef077e6d-f368-4872-bb87-9e031d3ea764,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16
T13:44:55.988729190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815896318274155,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-16T13:44:55.988730097Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&PodSandboxMetadata{Name:kube-proxy-btq6r,Uid:a2b7b283-da62-4cb8-a039-07a509491e5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815896301765953,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-da62-4cb8-a039-07a509491e5e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-08-16T13:44:55.988727017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-893736,Uid:9e3972b8e55820f8f106be0692f94f90,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891481546778,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.186:2379,kubernetes.io/config.hash: 9e3972b8e55820f8f106be0692f94f90,kubernetes.io/config.seen: 2024-08-16T13:44:51.026131356Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&PodSandboxMetadata{Name:
kube-controller-manager-default-k8s-diff-port-893736,Uid:57bd8aaf450c00c9ac4dc94bbc9c48de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891478514036,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c48de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57bd8aaf450c00c9ac4dc94bbc9c48de,kubernetes.io/config.seen: 2024-08-16T13:44:50.989582437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-893736,Uid:85077b11aa053e7b722c3c3d1f6c9c7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891475384566,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.186:8444,kubernetes.io/config.hash: 85077b11aa053e7b722c3c3d1f6c9c7b,kubernetes.io/config.seen: 2024-08-16T13:44:50.989578439Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-893736,Uid:fd65b07d81e7fe90256eaf6d40549d5a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891474383268,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf6d40549d5a,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: fd65b07d81e7fe90256eaf6d40549d5a,kubernetes.io/config.seen: 2024-08-16T13:44:50.989583593Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4b3cde51-e768-438a-87f7-d668d4a51805 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 14:07:26 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:26.994099135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4485fa85-1007-42b0-9074-aa77d36fe5c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:26 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:26.994159334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4485fa85-1007-42b0-9074-aa77d36fe5c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:26 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:26.994339904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4485fa85-1007-42b0-9074-aa77d36fe5c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.035723069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afef3eb2-7238-4497-a119-43bc41ccd335 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.035843215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afef3eb2-7238-4497-a119-43bc41ccd335 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.036914279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=811f359e-bb2c-4ffe-926b-2af4cd99a47a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.037291307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817247037269998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=811f359e-bb2c-4ffe-926b-2af4cd99a47a name=/runtime.v1.ImageService/ImageFsInfo
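(Editor's aside, not part of the captured log: the repeated ListContainers/ListPodSandbox/Version/ImageFsInfo requests in this journal are the kubelet's routine polling of the CRI-O runtime over the CRI API, not failures in themselves. A rough sketch of how one could reproduce the same queries by hand against this node follows; it assumes crictl is available inside the minikube guest and pointed at the CRI-O socket, which is an assumption and not shown in this log.

    # hedged sketch: approximate crictl equivalents of the RPCs seen above
    out/minikube-linux-amd64 -p default-k8s-diff-port-893736 ssh "sudo crictl version"      # ~ RuntimeService/Version
    out/minikube-linux-amd64 -p default-k8s-diff-port-893736 ssh "sudo crictl ps -a"        # ~ RuntimeService/ListContainers (no filter)
    out/minikube-linux-amd64 -p default-k8s-diff-port-893736 ssh "sudo crictl pods"         # ~ RuntimeService/ListPodSandbox
    out/minikube-linux-amd64 -p default-k8s-diff-port-893736 ssh "sudo crictl imagefsinfo"  # ~ ImageService/ImageFsInfo

These are approximate equivalents only; a filtered RPC such as the later ListContainers call with State:CONTAINER_RUNNING corresponds to "crictl ps" without the -a flag.)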
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.038105325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a6ae927-abaa-4beb-a144-abfdd4a73891 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.038160705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a6ae927-abaa-4beb-a144-abfdd4a73891 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.038403812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a6ae927-abaa-4beb-a144-abfdd4a73891 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.046131350Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee30d1b1-6a4a-4d35-8696-e8e9536a0876 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.048753302Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-xdwhx,Uid:66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815903853596101,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:55.988731812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1723815903852891551,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T13:44:55.988730968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10a3948041475f0c451ba2030a926ac49a93e132949031de4462f4fff9d12873,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-j9tqh,Uid:ef077e6d-f368-4872-bb87-9e031d3ea764,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815902052308194,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-j9tqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef077e6d-f368-4872-bb87-9e031d3ea764,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16
T13:44:55.988729190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815896318274155,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-16T13:44:55.988730097Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&PodSandboxMetadata{Name:kube-proxy-btq6r,Uid:a2b7b283-da62-4cb8-a039-07a509491e5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815896301765953,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-da62-4cb8-a039-07a509491e5e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-08-16T13:44:55.988727017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-893736,Uid:9e3972b8e55820f8f106be0692f94f90,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891481546778,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.186:2379,kubernetes.io/config.hash: 9e3972b8e55820f8f106be0692f94f90,kubernetes.io/config.seen: 2024-08-16T13:44:51.026131356Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&PodSandboxMetadata{Name:
kube-controller-manager-default-k8s-diff-port-893736,Uid:57bd8aaf450c00c9ac4dc94bbc9c48de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891478514036,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c48de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57bd8aaf450c00c9ac4dc94bbc9c48de,kubernetes.io/config.seen: 2024-08-16T13:44:50.989582437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-893736,Uid:85077b11aa053e7b722c3c3d1f6c9c7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891475384566,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.186:8444,kubernetes.io/config.hash: 85077b11aa053e7b722c3c3d1f6c9c7b,kubernetes.io/config.seen: 2024-08-16T13:44:50.989578439Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-893736,Uid:fd65b07d81e7fe90256eaf6d40549d5a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723815891474383268,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf6d40549d5a,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: fd65b07d81e7fe90256eaf6d40549d5a,kubernetes.io/config.seen: 2024-08-16T13:44:50.989583593Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ee30d1b1-6a4a-4d35-8696-e8e9536a0876 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.050874908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8582c2b-639e-4fd0-b93c-5d0e34ec856f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.051070688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8582c2b-639e-4fd0-b93c-5d0e34ec856f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.052144489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d8
1e7fe90256eaf6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annota
tions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9
ac4dc94bbc9c48de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b72
2c3c3d1f6c9c7b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8582c2b-639e-4fd0-b93c-5d0e34ec856f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.086920848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27180822-1434-466f-8fb7-fb010cc6f051 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.087042228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27180822-1434-466f-8fb7-fb010cc6f051 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.088631744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27a62288-4d9d-44a1-a582-ae8e021b53a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.089204690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817247089171901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27a62288-4d9d-44a1-a582-ae8e021b53a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.090408335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dea75d50-5f26-45fb-8151-20e3ae882bae name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.090559451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dea75d50-5f26-45fb-8151-20e3ae882bae name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:07:27 default-k8s-diff-port-893736 crio[726]: time="2024-08-16 14:07:27.090822744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723815927288158093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a684d7eb166f20d306b8d2f298e21663f877c1f86e1b35603cee544142d1af,PodSandboxId:f2cca593e350029016755210ab3afd4acfdb3a896b2a39a1aff8994c8254dab0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723815907382589834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a2a34a97-11aa-4c0e-b5e7-061dba89ed2d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910,PodSandboxId:3a0568e7a14e9087cc579e0b7e7de4698d9f45ce54d316edc12b53e5bdee8d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723815904126132263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xdwhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66987c52-9a8c-4ddd-a6cf-ac84172d8c8c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb,PodSandboxId:f947c137b097f1a1e432cca00e0188c9449ebef74565f313500bce79b947dc63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723815896583153000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btq6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b7b283-d
a62-4cb8-a039-07a509491e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825,PodSandboxId:0832c1beaccf1e546405a95590e2f232fd9dc3af301b054d0ddd1c26def744c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723815896468762272,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2fbf16a-3bc7-4300-8023-
5dbb20ba70bc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9,PodSandboxId:910361984af1fb80fb91b7169b8066c03ad84a0bec20ffaf6c1dfa6f3c5799e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723815891719403480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd65b07d81e7fe90256eaf
6d40549d5a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176,PodSandboxId:4aceba4e7ec56d78083e97d22dd30b21d45b180cdcc11a0acacc2f9b61bc17fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723815891718770190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3972b8e55820f8f106be0692f94f90,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239,PodSandboxId:3758ce55631b96de1faf39ead67443a9acde0c3a40267f1fc5631306ed23670c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723815891711839477,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57bd8aaf450c00c9ac4dc94bbc9c4
8de,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190,PodSandboxId:5042006cc8ce07e1595b62cd91a701e5674d2a8f26d0ee21ea000c84fa2100c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723815891706741833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-893736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85077b11aa053e7b722c3c3d1f6c9c7
b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dea75d50-5f26-45fb-8151-20e3ae882bae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f296429e678f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   0832c1beaccf1       storage-provisioner
	53a684d7eb166       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   f2cca593e3500       busybox
	8922cc9760a0e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   3a0568e7a14e9       coredns-6f6b679f8f-xdwhx
	99545c4e9a57a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      22 minutes ago      Running             kube-proxy                1                   f947c137b097f       kube-proxy-btq6r
	17df9b5cc9f16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   0832c1beaccf1       storage-provisioner
	ec5ec870d772b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      22 minutes ago      Running             kube-scheduler            1                   910361984af1f       kube-scheduler-default-k8s-diff-port-893736
	83bd481c9871b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   4aceba4e7ec56       etcd-default-k8s-diff-port-893736
	590cecb818b97       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      22 minutes ago      Running             kube-controller-manager   1                   3758ce55631b9       kube-controller-manager-default-k8s-diff-port-893736
	4f1bf38f05e69       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      22 minutes ago      Running             kube-apiserver            1                   5042006cc8ce0       kube-apiserver-default-k8s-diff-port-893736
	
	
	==> coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50656 - 25965 "HINFO IN 1543422988393869237.765001929891377544. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020181201s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-893736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-893736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=default-k8s-diff-port-893736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_38_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:38:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-893736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 14:07:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 14:05:51 +0000   Fri, 16 Aug 2024 13:38:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 14:05:51 +0000   Fri, 16 Aug 2024 13:38:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 14:05:51 +0000   Fri, 16 Aug 2024 13:38:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 14:05:51 +0000   Fri, 16 Aug 2024 13:45:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.186
	  Hostname:    default-k8s-diff-port-893736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6f3dd157da547f5bd69db04ff223432
	  System UUID:                d6f3dd15-7da5-47f5-bd69-db04ff223432
	  Boot ID:                    994e1b50-ef04-41ea-aa93-7dd82a2a6026
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-6f6b679f8f-xdwhx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-893736                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-893736             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-893736    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-btq6r                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-893736             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-j9tqh                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-893736 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-893736 event: Registered Node default-k8s-diff-port-893736 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-893736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-893736 event: Registered Node default-k8s-diff-port-893736 in Controller
	
	
	==> dmesg <==
	[Aug16 13:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053339] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.239436] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.611213] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.383665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.815541] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061386] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060954] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.176878] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.136118] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.309851] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.225700] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +2.167717] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.065550] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.582640] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.405193] systemd-fstab-generator[1555]: Ignoring "noauto" option for root device
	[Aug16 13:45] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.046548] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] <==
	{"level":"warn","ts":"2024-08-16T14:05:12.362114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.425704ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:05:12.362218Z","caller":"traceutil/trace.go:171","msg":"trace[1785495323] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1607; }","duration":"103.540842ms","start":"2024-08-16T14:05:12.258654Z","end":"2024-08-16T14:05:12.362195Z","steps":["trace[1785495323] 'range keys from in-memory index tree'  (duration: 103.416846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:05:37.297657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.339477ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7043789639099139657 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.186\" mod_revision:1618 > success:<request_put:<key:\"/registry/masterleases/192.168.50.186\" value_size:67 lease:7043789639099139653 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.186\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T14:05:37.298269Z","caller":"traceutil/trace.go:171","msg":"trace[686234450] transaction","detail":"{read_only:false; response_revision:1627; number_of_response:1; }","duration":"392.593819ms","start":"2024-08-16T14:05:36.905646Z","end":"2024-08-16T14:05:37.298240Z","steps":["trace[686234450] 'process raft request'  (duration: 392.278128ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:05:37.298417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T14:05:36.905630Z","time spent":"392.728368ms","remote":"127.0.0.1:41344","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1625 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-16T14:05:37.298649Z","caller":"traceutil/trace.go:171","msg":"trace[1916975605] transaction","detail":"{read_only:false; response_revision:1626; number_of_response:1; }","duration":"395.694012ms","start":"2024-08-16T14:05:36.902943Z","end":"2024-08-16T14:05:37.298637Z","steps":["trace[1916975605] 'process raft request'  (duration: 125.719588ms)","trace[1916975605] 'compare'  (duration: 268.025718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T14:05:37.298786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T14:05:36.902926Z","time spent":"395.820527ms","remote":"127.0.0.1:41166","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.186\" mod_revision:1618 > success:<request_put:<key:\"/registry/masterleases/192.168.50.186\" value_size:67 lease:7043789639099139653 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.186\" > >"}
	{"level":"info","ts":"2024-08-16T14:06:05.578795Z","caller":"traceutil/trace.go:171","msg":"trace[1244067661] transaction","detail":"{read_only:false; response_revision:1650; number_of_response:1; }","duration":"133.496827ms","start":"2024-08-16T14:06:05.445283Z","end":"2024-08-16T14:06:05.578780Z","steps":["trace[1244067661] 'process raft request'  (duration: 133.360959ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:06:06.954654Z","caller":"traceutil/trace.go:171","msg":"trace[1680456381] transaction","detail":"{read_only:false; response_revision:1651; number_of_response:1; }","duration":"135.280403ms","start":"2024-08-16T14:06:06.819354Z","end":"2024-08-16T14:06:06.954634Z","steps":["trace[1680456381] 'process raft request'  (duration: 74.027869ms)","trace[1680456381] 'compare'  (duration: 60.842586ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T14:06:26.047986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.641083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:06:26.048490Z","caller":"traceutil/trace.go:171","msg":"trace[944440982] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1666; }","duration":"162.064445ms","start":"2024-08-16T14:06:25.886320Z","end":"2024-08-16T14:06:26.048384Z","steps":["trace[944440982] 'range keys from in-memory index tree'  (duration: 161.57415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:06:31.938321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.058603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-08-16T14:06:31.938763Z","caller":"traceutil/trace.go:171","msg":"trace[1673188030] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1672; }","duration":"110.500744ms","start":"2024-08-16T14:06:31.828240Z","end":"2024-08-16T14:06:31.938741Z","steps":["trace[1673188030] 'range keys from in-memory index tree'  (duration: 109.959834ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:06:32.256555Z","caller":"traceutil/trace.go:171","msg":"trace[683851141] transaction","detail":"{read_only:false; response_revision:1673; number_of_response:1; }","duration":"313.188217ms","start":"2024-08-16T14:06:31.943127Z","end":"2024-08-16T14:06:32.256315Z","steps":["trace[683851141] 'process raft request'  (duration: 313.106817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:06:32.256685Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T14:06:31.943110Z","time spent":"313.513366ms","remote":"127.0.0.1:41344","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1670 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-16T14:06:32.742211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.498535ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7043789639099139996 > lease_revoke:<id:61c0915b6f14c736>","response":"size:27"}
	{"level":"info","ts":"2024-08-16T14:07:02.797047Z","caller":"traceutil/trace.go:171","msg":"trace[2086295421] transaction","detail":"{read_only:false; response_revision:1698; number_of_response:1; }","duration":"395.635464ms","start":"2024-08-16T14:07:02.401367Z","end":"2024-08-16T14:07:02.797003Z","steps":["trace[2086295421] 'process raft request'  (duration: 376.884677ms)","trace[2086295421] 'compare'  (duration: 18.264715ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T14:07:02.797991Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T14:07:02.401350Z","time spent":"396.497338ms","remote":"127.0.0.1:41344","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1695 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-16T14:07:04.921011Z","caller":"traceutil/trace.go:171","msg":"trace[1356203814] transaction","detail":"{read_only:false; response_revision:1699; number_of_response:1; }","duration":"113.415033ms","start":"2024-08-16T14:07:04.807568Z","end":"2024-08-16T14:07:04.920983Z","steps":["trace[1356203814] 'process raft request'  (duration: 113.036863ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:07:06.988614Z","caller":"traceutil/trace.go:171","msg":"trace[1161777317] linearizableReadLoop","detail":"{readStateIndex:2016; appliedIndex:2015; }","duration":"104.566956ms","start":"2024-08-16T14:07:06.884025Z","end":"2024-08-16T14:07:06.988592Z","steps":["trace[1161777317] 'read index received'  (duration: 33.697123ms)","trace[1161777317] 'applied index is now lower than readState.Index'  (duration: 70.868842ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T14:07:06.988733Z","caller":"traceutil/trace.go:171","msg":"trace[1567213922] transaction","detail":"{read_only:false; response_revision:1700; number_of_response:1; }","duration":"148.247118ms","start":"2024-08-16T14:07:06.840476Z","end":"2024-08-16T14:07:06.988723Z","steps":["trace[1567213922] 'process raft request'  (duration: 77.300073ms)","trace[1567213922] 'compare'  (duration: 70.604231ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T14:07:06.989569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.475325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:07:06.991220Z","caller":"traceutil/trace.go:171","msg":"trace[2029780270] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1700; }","duration":"107.183372ms","start":"2024-08-16T14:07:06.884020Z","end":"2024-08-16T14:07:06.991204Z","steps":["trace[2029780270] 'agreement among raft nodes before linearized reading'  (duration: 105.399958ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:07:07.181409Z","caller":"traceutil/trace.go:171","msg":"trace[1992401049] transaction","detail":"{read_only:false; response_revision:1701; number_of_response:1; }","duration":"183.421738ms","start":"2024-08-16T14:07:06.997965Z","end":"2024-08-16T14:07:07.181387Z","steps":["trace[1992401049] 'process raft request'  (duration: 112.813077ms)","trace[1992401049] 'compare'  (duration: 70.510386ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T14:07:09.352879Z","caller":"traceutil/trace.go:171","msg":"trace[462234270] transaction","detail":"{read_only:false; response_revision:1702; number_of_response:1; }","duration":"161.550755ms","start":"2024-08-16T14:07:09.191283Z","end":"2024-08-16T14:07:09.352833Z","steps":["trace[462234270] 'process raft request'  (duration: 160.746934ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:07:27 up 23 min,  0 users,  load average: 0.17, 0.25, 0.18
	Linux default-k8s-diff-port-893736 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] <==
	 > logger="UnhandledError"
	I0816 14:02:56.369683       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 14:04:55.367502       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:04:55.367672       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:04:56.370375       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:04:56.370630       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:04:56.370422       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:04:56.370766       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 14:04:56.371932       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:04:56.371960       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 14:05:56.372113       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 14:05:56.372113       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:05:56.372601       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0816 14:05:56.372599       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 14:05:56.373812       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:05:56.373834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] <==
	E0816 14:01:59.108279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:01:59.564711       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:02:29.115058       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:02:29.572766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:02:59.123306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:02:59.580162       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:03:29.132796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:03:29.589926       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:03:59.140221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:03:59.599673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:04:29.146577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:04:29.606976       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:04:59.153200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:04:59.614642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:05:29.160667       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:05:29.624018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:05:51.255092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-893736"
	E0816 14:05:59.166835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:05:59.634282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:06:26.066065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="215.519µs"
	E0816 14:06:29.174114       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:06:29.642025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:06:41.061889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="105.842µs"
	E0816 14:06:59.184658       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:06:59.651367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:44:56.807267       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:44:56.817235       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.186"]
	E0816 13:44:56.817411       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:44:56.848664       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:44:56.848711       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:44:56.848737       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:44:56.852262       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:44:56.852605       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:44:56.852618       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:56.854157       1 config.go:197] "Starting service config controller"
	I0816 13:44:56.854183       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:44:56.854208       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:44:56.854213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:44:56.857123       1 config.go:326] "Starting node config controller"
	I0816 13:44:56.857175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:44:56.955262       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:44:56.955338       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:44:56.957415       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] <==
	I0816 13:44:52.926610       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:44:55.310586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:44:55.312539       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:44:55.312765       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:44:55.312868       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:44:55.371928       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:44:55.371984       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:44:55.382179       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:44:55.382299       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:44:55.382334       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:44:55.382366       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:44:55.483608       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 14:06:11 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:11.382411     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817171381193840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:21 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:21.384993     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817181384169483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:21 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:21.385542     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817181384169483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:26 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:26.046673     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 14:06:31 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:31.388756     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817191387910708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:31 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:31.388848     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817191387910708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:41 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:41.045932     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 14:06:41 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:41.392030     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817201391185448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:41 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:41.392538     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817201391185448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:51.068735     937 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:51.394872     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817211394334936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:51 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:51.394900     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817211394334936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:06:52 default-k8s-diff-port-893736 kubelet[937]: E0816 14:06:52.046795     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 14:07:01 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:01.398204     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817221397533243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:07:01 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:01.398625     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817221397533243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:07:05 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:05.048015     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 14:07:11 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:11.401026     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817231400632954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:07:11 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:11.401092     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817231400632954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:07:20 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:20.046598     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j9tqh" podUID="ef077e6d-f368-4872-bb87-9e031d3ea764"
	Aug 16 14:07:21 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:21.403686     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817241403145608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:07:21 default-k8s-diff-port-893736 kubelet[937]: E0816 14:07:21.403751     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817241403145608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] <==
	I0816 13:44:56.680103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 13:45:26.684749       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] <==
	I0816 13:45:27.384272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:45:27.396206       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:45:27.396288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:45:44.795204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:45:44.795771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b686b8e6-c7e8-4382-830a-268f7125cb2c", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-893736_9dc92472-8aec-48b8-972b-56b4cd9bdaff became leader
	I0816 13:45:44.797879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-893736_9dc92472-8aec-48b8-972b-56b4cd9bdaff!
	I0816 13:45:44.898673       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-893736_9dc92472-8aec-48b8-972b-56b4cd9bdaff!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-j9tqh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 describe pod metrics-server-6867b74b74-j9tqh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-893736 describe pod metrics-server-6867b74b74-j9tqh: exit status 1 (69.965919ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-j9tqh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-893736 describe pod metrics-server-6867b74b74-j9tqh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.54s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (369.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-302520 -n embed-certs-302520
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 14:05:28.902696433 +0000 UTC m=+6271.811387052
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-302520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-302520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.258µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-302520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
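(The check that failed here waits for pods labelled k8s-app=kubernetes-dashboard and then verifies that the dashboard-metrics-scraper deployment uses the overridden registry.k8s.io/echoserver:1.4 image, per the "addons enable dashboard --images=MetricsScraper=..." entries in the audit log below. A hedged sketch for reproducing that check by hand against this profile, using only names from this run:)

	# were any dashboard pods ever scheduled?
	kubectl --context embed-certs-302520 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# which image is the scraper deployment actually configured with?
	kubectl --context embed-certs-302520 get deploy/dashboard-metrics-scraper -n kubernetes-dashboard \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'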
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-302520 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-302520 logs -n 25: (1.252553165s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 14:03 UTC | 16 Aug 24 14:03 UTC |
	| start   | -p newest-cni-375308 --memory=2200 --alsologtostderr   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:03 UTC | 16 Aug 24 14:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-375308             | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-375308                                   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-375308                  | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-375308 --memory=2200 --alsologtostderr   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC | 16 Aug 24 14:04 UTC |
	| start   | -p auto-251866 --memory=3072                           | auto-251866                  | jenkins | v1.33.1 | 16 Aug 24 14:04 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-375308 image list                           | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:05 UTC | 16 Aug 24 14:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-375308                                   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:05 UTC | 16 Aug 24 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-375308                                   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:05 UTC | 16 Aug 24 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-375308                                   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:05 UTC | 16 Aug 24 14:05 UTC |
	| delete  | -p newest-cni-375308                                   | newest-cni-375308            | jenkins | v1.33.1 | 16 Aug 24 14:05 UTC | 16 Aug 24 14:05 UTC |
	| start   | -p kindnet-251866                                      | kindnet-251866               | jenkins | v1.33.1 | 16 Aug 24 14:05 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 14:05:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 14:05:27.320878   65852 out.go:345] Setting OutFile to fd 1 ...
	I0816 14:05:27.321159   65852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 14:05:27.321167   65852 out.go:358] Setting ErrFile to fd 2...
	I0816 14:05:27.321171   65852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 14:05:27.321347   65852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 14:05:27.321909   65852 out.go:352] Setting JSON to false
	I0816 14:05:27.322852   65852 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6472,"bootTime":1723810655,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 14:05:27.322913   65852 start.go:139] virtualization: kvm guest
	I0816 14:05:27.325141   65852 out.go:177] * [kindnet-251866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 14:05:27.326729   65852 notify.go:220] Checking for updates...
	I0816 14:05:27.326782   65852 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 14:05:27.328390   65852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 14:05:27.329898   65852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 14:05:27.331396   65852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 14:05:27.332667   65852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 14:05:27.333900   65852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 14:05:27.335682   65852 config.go:182] Loaded profile config "auto-251866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:05:27.335774   65852 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:05:27.335848   65852 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 14:05:27.335931   65852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 14:05:27.372850   65852 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 14:05:27.374168   65852 start.go:297] selected driver: kvm2
	I0816 14:05:27.374187   65852 start.go:901] validating driver "kvm2" against <nil>
	I0816 14:05:27.374198   65852 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 14:05:27.374914   65852 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 14:05:27.374984   65852 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 14:05:27.391403   65852 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 14:05:27.391464   65852 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 14:05:27.391750   65852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 14:05:27.391826   65852 cni.go:84] Creating CNI manager for "kindnet"
	I0816 14:05:27.391838   65852 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 14:05:27.391909   65852 start.go:340] cluster config:
	{Name:kindnet-251866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-251866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 14:05:27.392040   65852 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 14:05:27.394897   65852 out.go:177] * Starting "kindnet-251866" primary control-plane node in "kindnet-251866" cluster
	I0816 14:05:24.230998   65125 main.go:141] libmachine: (auto-251866) DBG | domain auto-251866 has defined MAC address 52:54:00:21:f2:41 in network mk-auto-251866
	I0816 14:05:24.231411   65125 main.go:141] libmachine: (auto-251866) DBG | unable to find current IP address of domain auto-251866 in network mk-auto-251866
	I0816 14:05:24.231435   65125 main.go:141] libmachine: (auto-251866) DBG | I0816 14:05:24.231360   65181 retry.go:31] will retry after 5.21129538s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.479682650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817129479652114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f92471c7-bf6b-409f-8529-332d6a2d62c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.481123198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a2dde2e-9631-4bb5-bf42-1eb83f2bc0bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.481328723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a2dde2e-9631-4bb5-bf42-1eb83f2bc0bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.482169207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a2dde2e-9631-4bb5-bf42-1eb83f2bc0bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.531741534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c37ec13c-9ee3-41ba-90d7-2ccee74786be name=/runtime.v1.RuntimeService/Version
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.531924953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c37ec13c-9ee3-41ba-90d7-2ccee74786be name=/runtime.v1.RuntimeService/Version
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.534005440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54f369c0-7c92-44d9-9516-73e799b67c73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.534516539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817129534492467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54f369c0-7c92-44d9-9516-73e799b67c73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.535218288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf9b6c19-2b18-4c26-b164-ff7bef74fc01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.535298689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf9b6c19-2b18-4c26-b164-ff7bef74fc01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.535599178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf9b6c19-2b18-4c26-b164-ff7bef74fc01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.579240566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b3d7d11-fd24-4118-b70f-170babc0b046 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.579339569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b3d7d11-fd24-4118-b70f-170babc0b046 name=/runtime.v1.RuntimeService/Version
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.580948167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c239a077-6f2b-404a-98d7-9f20350373cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.581649673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817129581617300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c239a077-6f2b-404a-98d7-9f20350373cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.582570948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24b933c5-7c38-4929-9139-4f55e3cc0b6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.582648834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24b933c5-7c38-4929-9139-4f55e3cc0b6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.582979638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24b933c5-7c38-4929-9139-4f55e3cc0b6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.623288286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=439a8c69-24af-4efd-9ed7-9c99e840682b name=/runtime.v1.RuntimeService/Version
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.623386046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=439a8c69-24af-4efd-9ed7-9c99e840682b name=/runtime.v1.RuntimeService/Version
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.624666795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15b6eff8-2a5d-4283-b7c9-eb32386e1753 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.625241990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817129625217460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15b6eff8-2a5d-4283-b7c9-eb32386e1753 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.625922318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e433c16-519f-4be4-a977-cbf46ca519eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.625973893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e433c16-519f-4be4-a977-cbf46ca519eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:05:29 embed-certs-302520 crio[731]: time="2024-08-16 14:05:29.626177558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9,PodSandboxId:3263fbb7130a4509352aea8c9440162a738feafa6856e2e2b3d34d3db2ba7679,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723816211588747512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e139aaf-e6d1-4661-8c7b-90c1cc9827d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a,PodSandboxId:9438a8a614cf7bbd8de7689ca3bc3629b2c1edf2e139f13ba4e815927f970ec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210735087714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-whnqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f4d69de-4130-4959-b1ef-9ddfbe5d6a72,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670,PodSandboxId:f510cef5d08928e5b77a25465bb6e3a6eeea17c0801112c7d5360071d666cb7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723816210671089660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
65235cd-590b-4108-b5fc-b5f6072c8f5f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1,PodSandboxId:8f708e8ed40790f92dfaaebdc46d9ea5682c9cd2e827b524c98f18525041a515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723816209933953021,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spgtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998,PodSandboxId:b32a6f7073caa7ea55df0673ea4d09884f9aecd96a926a08b267943d559f76a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723816199176703329,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18192e92212a656a4a67a5434489cfbb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed,PodSandboxId:d5efdc758602e539fb18df58bbdb5fd1a2a2f9af4ab74aefa3059ea11cea6fde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723816199179375113,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9298f22fc34ff49d8719b2eab520558,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577,PodSandboxId:4724ddd4a2ac02adb82deac2dfe603c724cabdd076f443a88879ad90576bbfb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723816199100245070,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd,PodSandboxId:0d8ae2faf420d01cc7a6e8c3f9b4b66d15cf8f400620b88d05777735165a0997,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723816199076235411,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497839c317a5df8d8ab75b11fa2ea7f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6,PodSandboxId:8682941ad9d1df82e5d9018cf36a238cdbe95cdb87e67b4bfc9901d879461f22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723815913254305548,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-302520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea84440da82a0e6e97e5514a37c8c507,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e433c16-519f-4be4-a977-cbf46ca519eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d55f680e1786       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   3263fbb7130a4       storage-provisioner
	ffcd4d176bf4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   9438a8a614cf7       coredns-6f6b679f8f-whnqh
	25b192393075e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   f510cef5d0892       coredns-6f6b679f8f-zh69g
	c4414024807b0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   15 minutes ago      Running             kube-proxy                0                   8f708e8ed4079       kube-proxy-spgtw
	bd62b9f92fb76       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   15 minutes ago      Running             kube-scheduler            2                   d5efdc758602e       kube-scheduler-embed-certs-302520
	8473f5fc22d8f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   b32a6f7073caa       etcd-embed-certs-302520
	3ac14a9494897       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   15 minutes ago      Running             kube-apiserver            2                   4724ddd4a2ac0       kube-apiserver-embed-certs-302520
	c587865a89293       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   15 minutes ago      Running             kube-controller-manager   2                   0d8ae2faf420d       kube-controller-manager-embed-certs-302520
	ecc28eb673520       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 minutes ago      Exited              kube-apiserver            1                   8682941ad9d1d       kube-apiserver-embed-certs-302520
	
	
	==> coredns [25b192393075efc6eb2238107efce33495fe6e172de9fcf1e68112955e90f670] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ffcd4d176bf4be7886729826a56f7273e9e0838c165cc5fa840fd94d50c7c03a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-302520
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-302520
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=embed-certs-302520
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 13:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-302520
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 14:05:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 14:00:26 +0000   Fri, 16 Aug 2024 13:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 14:00:26 +0000   Fri, 16 Aug 2024 13:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 14:00:26 +0000   Fri, 16 Aug 2024 13:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 14:00:26 +0000   Fri, 16 Aug 2024 13:50:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    embed-certs-302520
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66a9767091de4d4dbfef467bedb1fef1
	  System UUID:                66a97670-91de-4d4d-bfef-467bedb1fef1
	  Boot ID:                    214002d4-e2fe-469e-a5c9-fe7ebc908da5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-whnqh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-zh69g                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-302520                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-302520             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-302520    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-spgtw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-302520             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-q58h2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-302520 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-302520 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-302520 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-302520 event: Registered Node embed-certs-302520 in Controller
	
	
	==> dmesg <==
	[  +0.055279] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.001510] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.484198] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621418] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 13:45] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.061219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070451] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.184541] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.162796] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.294148] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.223794] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.065464] kauditd_printk_skb: 132 callbacks suppressed
	[  +2.210258] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +4.635571] kauditd_printk_skb: 95 callbacks suppressed
	[  +6.898897] kauditd_printk_skb: 85 callbacks suppressed
	[Aug16 13:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.970635] systemd-fstab-generator[2570]: Ignoring "noauto" option for root device
	[Aug16 13:50] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.776109] systemd-fstab-generator[2888]: Ignoring "noauto" option for root device
	[  +5.452761] systemd-fstab-generator[3007]: Ignoring "noauto" option for root device
	[  +0.100947] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 13:51] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8473f5fc22d8f11c7a750bc72270174924ff8715e66f72e84090c6619f56d998] <==
	{"level":"info","ts":"2024-08-16T13:50:00.099973Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:embed-certs-302520 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:50:00.100876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:50:00.101919Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:50:00.105877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T13:50:00.106488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:50:00.106916Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:50:00.106946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:50:00.107445Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:50:00.111326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2024-08-16T13:50:00.111727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:50:00.111904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:50:00.111951Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T14:00:00.156368Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-16T14:00:00.166402Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"8.927342ms","hash":1047988639,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-16T14:00:00.166540Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1047988639,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T14:04:20.559751Z","caller":"traceutil/trace.go:171","msg":"trace[411437764] linearizableReadLoop","detail":"{readStateIndex:1322; appliedIndex:1321; }","duration":"259.680357ms","start":"2024-08-16T14:04:20.299989Z","end":"2024-08-16T14:04:20.559670Z","steps":["trace[411437764] 'read index received'  (duration: 256.931531ms)","trace[411437764] 'applied index is now lower than readState.Index'  (duration: 2.748204ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T14:04:20.559929Z","caller":"traceutil/trace.go:171","msg":"trace[1397895486] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"389.369188ms","start":"2024-08-16T14:04:20.170537Z","end":"2024-08-16T14:04:20.559906Z","steps":["trace[1397895486] 'process raft request'  (duration: 386.504106ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T14:04:20.561717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T14:04:20.170520Z","time spent":"390.505306ms","remote":"127.0.0.1:46222","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1137 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-16T14:04:20.560193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.000826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T14:04:20.562001Z","caller":"traceutil/trace.go:171","msg":"trace[1389328759] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1139; }","duration":"262.004953ms","start":"2024-08-16T14:04:20.299983Z","end":"2024-08-16T14:04:20.561988Z","steps":["trace[1389328759] 'agreement among raft nodes before linearized reading'  (duration: 259.978947ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T14:05:00.164119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-08-16T14:05:00.168077Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"3.52043ms","hash":577693759,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T14:05:00.168131Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":577693759,"revision":927,"compact-revision":684}
	{"level":"warn","ts":"2024-08-16T14:05:13.775625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.962733ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12865818057461029665 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.125\" mod_revision:1175 > success:<request_put:<key:\"/registry/masterleases/192.168.39.125\" value_size:67 lease:3642446020606253855 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.125\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T14:05:13.776866Z","caller":"traceutil/trace.go:171","msg":"trace[851439540] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"261.753371ms","start":"2024-08-16T14:05:13.514996Z","end":"2024-08-16T14:05:13.776750Z","steps":["trace[851439540] 'process raft request'  (duration: 128.287444ms)","trace[851439540] 'compare'  (duration: 131.823579ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:05:30 up 20 min,  0 users,  load average: 0.14, 0.14, 0.15
	Linux embed-certs-302520 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3ac14a9494897cd2451337557224661bd534820b7d2b6abbe0b7f06a60433577] <==
	I0816 14:01:02.623848       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:01:02.623875       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 14:03:02.624737       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:03:02.624943       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:03:02.625020       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:03:02.625036       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 14:03:02.626089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:03:02.626156       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 14:05:01.623151       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:05:01.623282       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 14:05:02.625560       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 14:05:02.625580       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 14:05:02.625880       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0816 14:05:02.625836       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 14:05:02.627026       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 14:05:02.627092       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ecc28eb673520758cb2eee6f8dba642d92cb4383b0bb5e85ee1b0608d7f24fa6] <==
	W0816 13:49:52.938631       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.028311       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.078602       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.081132       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.110155       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.145297       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.178944       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.196842       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.207551       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.239335       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.283672       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.323123       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.388561       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.399093       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.411666       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.434924       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.541079       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.732667       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.749106       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.794230       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.955247       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:53.970728       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:54.041710       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:54.294717       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 13:49:54.363176       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c587865a89293996175b297ce0b354299a1c2641b13d975322b0180e1d9c22bd] <==
	E0816 14:00:08.647240       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:00:09.214174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:00:26.894517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-302520"
	E0816 14:00:38.653990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:00:39.223559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:01:02.547905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="374.448µs"
	E0816 14:01:08.660486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:01:09.232487       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 14:01:15.548857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="295.332µs"
	E0816 14:01:38.668393       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:01:39.240442       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:02:08.674691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:02:09.248363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:02:38.680868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:02:39.256976       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:03:08.688933       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:03:09.265503       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:03:38.694910       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:03:39.273438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:04:08.701228       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:04:09.282826       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:04:38.707824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:04:39.291864       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 14:05:08.715956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 14:05:09.302631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c4414024807b0b08671bfa31dc0b388df67e122d81578315b8f1fc3bddab16b1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:50:10.327993       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:50:10.338192       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0816 13:50:10.338293       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:50:10.575891       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:50:10.575969       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:50:10.575999       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:50:10.599917       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:50:10.600255       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:50:10.600290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:50:10.601582       1 config.go:197] "Starting service config controller"
	I0816 13:50:10.601608       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:50:10.601621       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:50:10.601646       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:50:10.602201       1 config.go:326] "Starting node config controller"
	I0816 13:50:10.602208       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:50:10.702067       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:50:10.702160       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:50:10.702297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd62b9f92fb761c4352d0d4d0130256f136acb5a5bc6022f6c954b0b101ab4ed] <==
	W0816 13:50:02.504029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 13:50:02.504160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.558348       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 13:50:02.558584       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 13:50:02.606550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 13:50:02.606694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.655457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 13:50:02.655584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.665498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 13:50:02.665692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.746206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 13:50:02.746470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.805547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 13:50:02.805756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.842397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 13:50:02.844663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.845160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 13:50:02.845210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.845169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 13:50:02.845254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.915603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 13:50:02.915662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 13:50:02.955048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 13:50:02.955201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0816 13:50:05.733675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 14:04:19 embed-certs-302520 kubelet[2895]: E0816 14:04:19.531099    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 14:04:24 embed-certs-302520 kubelet[2895]: E0816 14:04:24.785281    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817064784699818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:24 embed-certs-302520 kubelet[2895]: E0816 14:04:24.785331    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817064784699818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:30 embed-certs-302520 kubelet[2895]: E0816 14:04:30.530371    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 14:04:34 embed-certs-302520 kubelet[2895]: E0816 14:04:34.787488    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817074787065490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:34 embed-certs-302520 kubelet[2895]: E0816 14:04:34.787806    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817074787065490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:44 embed-certs-302520 kubelet[2895]: E0816 14:04:44.790051    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817084789607500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:44 embed-certs-302520 kubelet[2895]: E0816 14:04:44.790084    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817084789607500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:45 embed-certs-302520 kubelet[2895]: E0816 14:04:45.530928    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 14:04:54 embed-certs-302520 kubelet[2895]: E0816 14:04:54.792581    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817094792127550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:54 embed-certs-302520 kubelet[2895]: E0816 14:04:54.793150    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817094792127550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:04:58 embed-certs-302520 kubelet[2895]: E0816 14:04:58.531553    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]: E0816 14:05:04.556520    2895 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]: E0816 14:05:04.795036    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817104794528010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:05:04 embed-certs-302520 kubelet[2895]: E0816 14:05:04.795214    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817104794528010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:05:11 embed-certs-302520 kubelet[2895]: E0816 14:05:11.531978    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 14:05:14 embed-certs-302520 kubelet[2895]: E0816 14:05:14.797633    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817114797064375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:05:14 embed-certs-302520 kubelet[2895]: E0816 14:05:14.797691    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817114797064375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:05:24 embed-certs-302520 kubelet[2895]: E0816 14:05:24.531185    2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q58h2" podUID="1351eabe-df61-4b9c-b67b-2e9c963b0eaf"
	Aug 16 14:05:24 embed-certs-302520 kubelet[2895]: E0816 14:05:24.799265    2895 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817124798836658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 14:05:24 embed-certs-302520 kubelet[2895]: E0816 14:05:24.799560    2895 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817124798836658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5d55f680e17867bafc1c19e765974907ab36d34fd5cc2d97ce049e2dae88cdb9] <==
	I0816 13:50:11.745010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 13:50:11.754761       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 13:50:11.755002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 13:50:11.762949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 13:50:11.763179       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-302520_93dce01a-f641-4cdf-ad96-662b0604b4bf!
	I0816 13:50:11.767323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9bfc3439-9ea9-4a3c-8502-c0e0a228ca4f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-302520_93dce01a-f641-4cdf-ad96-662b0604b4bf became leader
	I0816 13:50:11.864009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-302520_93dce01a-f641-4cdf-ad96-662b0604b4bf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-302520 -n embed-certs-302520
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-302520 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-q58h2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-302520 describe pod metrics-server-6867b74b74-q58h2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-302520 describe pod metrics-server-6867b74b74-q58h2: exit status 1 (63.155417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-q58h2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-302520 describe pod metrics-server-6867b74b74-q58h2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (369.44s)
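The kubelet log captured above shows why the addon pod never became ready: the kubelet was stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, an unreachable registry, for metrics-server-6867b74b74-q58h2, and the pod had already been removed by the time the post-mortem ran (hence the NotFound from describe). A minimal manual check, assuming the embed-certs-302520 cluster is still running and the pod still exists, might look like:

    kubectl --context embed-certs-302520 -n kube-system get pods
    kubectl --context embed-certs-302520 -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-q58h2

The events output would show the repeated "Failed to pull image" / "Back-off pulling image" events that correspond to the pod_workers.go errors in the kubelet log; this is only an illustrative sketch, not part of the test harness.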

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
[... 126 further identical helpers_test.go:329 WARNING lines omitted: the same pod list request against https://192.168.72.105:8443 kept returning "connect: connection refused" on every poll for the remainder of the wait ...]
E0816 14:03:43.991409   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.105:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (221.559262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-882237" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-882237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-882237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.788µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-882237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
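For context when re-running this failure by hand, a minimal manual version of the check the test performs might look like the commands below. These are a hypothetical reproduction sketch, not part of the test output; they assume the old-k8s-version-882237 apiserver is reachable (during this run it was stopped, so every request was refused). The context name, namespace, label selector, deployment name, and expected image are taken from the log above.

  # List the dashboard pods the test waits for (same label selector and namespace as the warnings above)
  kubectl --context old-k8s-version-882237 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
  # Inspect the scraper deployment and confirm its container image contains registry.k8s.io/echoserver:1.4
  kubectl --context old-k8s-version-882237 get deploy dashboard-metrics-scraper -n kubernetes-dashboard \
    -o jsonpath='{.spec.template.spec.containers[*].image}'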
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (216.65049ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-882237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-882237 logs -n 25: (1.60095348s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-779306 -- sudo                         | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-779306                                 | cert-options-779306          | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:34 UTC |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:34 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-759623                           | kubernetes-upgrade-759623    | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:35 UTC |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:35 UTC | 16 Aug 24 13:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-302520            | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-311070             | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:36 UTC | 16 Aug 24 13:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:37 UTC | 16 Aug 24 13:38 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-050553                              | cert-expiration-050553       | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-338033 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:38 UTC |
	|         | disable-driver-mounts-338033                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC | 16 Aug 24 13:39 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-302520                 | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-882237        | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-302520                                  | embed-certs-302520           | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-311070                  | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-311070                                   | no-preload-311070            | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-893736  | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC | 16 Aug 24 13:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:39 UTC |                     |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-882237             | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC | 16 Aug 24 13:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-882237                              | old-k8s-version-882237       | jenkins | v1.33.1 | 16 Aug 24 13:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893736       | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-893736 | jenkins | v1.33.1 | 16 Aug 24 13:42 UTC | 16 Aug 24 13:49 UTC |
	|         | default-k8s-diff-port-893736                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 13:42:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 13:42:15.998819   58430 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:42:15.998960   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.998970   58430 out.go:358] Setting ErrFile to fd 2...
	I0816 13:42:15.998976   58430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:42:15.999197   58430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:42:15.999747   58430 out.go:352] Setting JSON to false
	I0816 13:42:16.000715   58430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5081,"bootTime":1723810655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:42:16.000770   58430 start.go:139] virtualization: kvm guest
	I0816 13:42:16.003216   58430 out.go:177] * [default-k8s-diff-port-893736] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:42:16.004663   58430 notify.go:220] Checking for updates...
	I0816 13:42:16.004698   58430 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:42:16.006298   58430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:42:16.007719   58430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:42:16.009073   58430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:42:16.010602   58430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:42:16.012058   58430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:42:16.013799   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:42:16.014204   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.014278   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.029427   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0816 13:42:16.029977   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.030548   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.030573   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.030903   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.031164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.031412   58430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:42:16.031691   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:42:16.031731   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:42:16.046245   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0816 13:42:16.046668   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:42:16.047205   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:42:16.047244   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:42:16.047537   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:42:16.047730   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:42:16.080470   58430 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 13:42:16.081700   58430 start.go:297] selected driver: kvm2
	I0816 13:42:16.081721   58430 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.081825   58430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:42:16.082512   58430 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.082593   58430 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 13:42:16.097784   58430 install.go:137] /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0816 13:42:16.098155   58430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:42:16.098223   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:42:16.098233   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:42:16.098274   58430 start.go:340] cluster config:
	{Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:42:16.098365   58430 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 13:42:16.100341   58430 out.go:177] * Starting "default-k8s-diff-port-893736" primary control-plane node in "default-k8s-diff-port-893736" cluster
	I0816 13:42:17.205125   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:16.101925   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:42:16.101966   58430 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 13:42:16.101973   58430 cache.go:56] Caching tarball of preloaded images
	I0816 13:42:16.102052   58430 preload.go:172] Found /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 13:42:16.102063   58430 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 13:42:16.102162   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:42:16.102344   58430 start.go:360] acquireMachinesLock for default-k8s-diff-port-893736: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:42:23.285172   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:26.357214   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:32.437218   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:35.509221   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:41.589174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:44.661162   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:50.741223   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:53.813193   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:42:59.893180   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:02.965205   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:09.045252   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:12.117232   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:18.197189   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:21.269234   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:27.349182   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:30.421174   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:36.501197   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:39.573246   57240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I0816 13:43:42.577406   57440 start.go:364] duration metric: took 4m10.318515071s to acquireMachinesLock for "no-preload-311070"
	I0816 13:43:42.577513   57440 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:43:42.577529   57440 fix.go:54] fixHost starting: 
	I0816 13:43:42.577955   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:43:42.577989   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:43:42.593032   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0816 13:43:42.593416   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:43:42.593860   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:43:42.593882   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:43:42.594256   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:43:42.594434   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:43:42.594586   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:43:42.596234   57440 fix.go:112] recreateIfNeeded on no-preload-311070: state=Stopped err=<nil>
	I0816 13:43:42.596261   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	W0816 13:43:42.596431   57440 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:43:42.598334   57440 out.go:177] * Restarting existing kvm2 VM for "no-preload-311070" ...
	I0816 13:43:42.574954   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:43:42.574990   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575324   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:43:42.575349   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:43:42.575554   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:43:42.577250   57240 machine.go:96] duration metric: took 4m37.4289608s to provisionDockerMachine
	I0816 13:43:42.577309   57240 fix.go:56] duration metric: took 4m37.450613575s for fixHost
	I0816 13:43:42.577314   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 4m37.450631849s
	W0816 13:43:42.577330   57240 start.go:714] error starting host: provision: host is not running
	W0816 13:43:42.577401   57240 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 13:43:42.577410   57240 start.go:729] Will try again in 5 seconds ...
	I0816 13:43:42.599558   57440 main.go:141] libmachine: (no-preload-311070) Calling .Start
	I0816 13:43:42.599720   57440 main.go:141] libmachine: (no-preload-311070) Ensuring networks are active...
	I0816 13:43:42.600383   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network default is active
	I0816 13:43:42.600682   57440 main.go:141] libmachine: (no-preload-311070) Ensuring network mk-no-preload-311070 is active
	I0816 13:43:42.601157   57440 main.go:141] libmachine: (no-preload-311070) Getting domain xml...
	I0816 13:43:42.601868   57440 main.go:141] libmachine: (no-preload-311070) Creating domain...
	I0816 13:43:43.816308   57440 main.go:141] libmachine: (no-preload-311070) Waiting to get IP...
	I0816 13:43:43.817179   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:43.817566   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:43.817586   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:43.817516   58770 retry.go:31] will retry after 295.385031ms: waiting for machine to come up
	I0816 13:43:44.115046   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.115850   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.115875   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.115787   58770 retry.go:31] will retry after 340.249659ms: waiting for machine to come up
	I0816 13:43:44.457278   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.457722   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.457752   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.457657   58770 retry.go:31] will retry after 476.905089ms: waiting for machine to come up
	I0816 13:43:44.936230   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:44.936674   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:44.936714   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:44.936640   58770 retry.go:31] will retry after 555.288542ms: waiting for machine to come up
	I0816 13:43:45.493301   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.493698   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.493724   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.493657   58770 retry.go:31] will retry after 462.336365ms: waiting for machine to come up
	I0816 13:43:45.957163   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:45.957553   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:45.957580   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:45.957509   58770 retry.go:31] will retry after 886.665194ms: waiting for machine to come up
	I0816 13:43:46.845380   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:46.845743   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:46.845763   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:46.845723   58770 retry.go:31] will retry after 909.05227ms: waiting for machine to come up
	I0816 13:43:47.579134   57240 start.go:360] acquireMachinesLock for embed-certs-302520: {Name:mk0b4b88ee8893cefe753073bc08069ac50e5641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 13:43:47.755998   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:47.756439   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:47.756460   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:47.756407   58770 retry.go:31] will retry after 1.380778497s: waiting for machine to come up
	I0816 13:43:49.138398   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:49.138861   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:49.138884   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:49.138811   58770 retry.go:31] will retry after 1.788185586s: waiting for machine to come up
	I0816 13:43:50.929915   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:50.930326   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:50.930356   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:50.930276   58770 retry.go:31] will retry after 1.603049262s: waiting for machine to come up
	I0816 13:43:52.536034   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:52.536492   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:52.536518   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:52.536438   58770 retry.go:31] will retry after 1.964966349s: waiting for machine to come up
	I0816 13:43:54.504003   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:54.504408   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:54.504440   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:54.504363   58770 retry.go:31] will retry after 3.616796835s: waiting for machine to come up
	I0816 13:43:58.122295   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:43:58.122714   57440 main.go:141] libmachine: (no-preload-311070) DBG | unable to find current IP address of domain no-preload-311070 in network mk-no-preload-311070
	I0816 13:43:58.122747   57440 main.go:141] libmachine: (no-preload-311070) DBG | I0816 13:43:58.122673   58770 retry.go:31] will retry after 3.893804146s: waiting for machine to come up
	I0816 13:44:02.020870   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021351   57440 main.go:141] libmachine: (no-preload-311070) Found IP for machine: 192.168.61.116
	I0816 13:44:02.021372   57440 main.go:141] libmachine: (no-preload-311070) Reserving static IP address...
	I0816 13:44:02.021385   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has current primary IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.021917   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.021948   57440 main.go:141] libmachine: (no-preload-311070) Reserved static IP address: 192.168.61.116
	I0816 13:44:02.021966   57440 main.go:141] libmachine: (no-preload-311070) DBG | skip adding static IP to network mk-no-preload-311070 - found existing host DHCP lease matching {name: "no-preload-311070", mac: "52:54:00:14:17:b3", ip: "192.168.61.116"}
	I0816 13:44:02.021977   57440 main.go:141] libmachine: (no-preload-311070) DBG | Getting to WaitForSSH function...
	I0816 13:44:02.021989   57440 main.go:141] libmachine: (no-preload-311070) Waiting for SSH to be available...
	I0816 13:44:02.024661   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025071   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.025094   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.025327   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH client type: external
	I0816 13:44:02.025349   57440 main.go:141] libmachine: (no-preload-311070) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa (-rw-------)
	I0816 13:44:02.025376   57440 main.go:141] libmachine: (no-preload-311070) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:02.025387   57440 main.go:141] libmachine: (no-preload-311070) DBG | About to run SSH command:
	I0816 13:44:02.025406   57440 main.go:141] libmachine: (no-preload-311070) DBG | exit 0
	I0816 13:44:02.148864   57440 main.go:141] libmachine: (no-preload-311070) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:02.149279   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetConfigRaw
	I0816 13:44:02.149868   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.152149   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152460   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.152481   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.152681   57440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/config.json ...
	I0816 13:44:02.152853   57440 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:02.152869   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:02.153131   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.155341   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155703   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.155743   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.155845   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.389847   57945 start.go:364] duration metric: took 3m33.186277254s to acquireMachinesLock for "old-k8s-version-882237"
	I0816 13:44:03.389911   57945 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:03.389923   57945 fix.go:54] fixHost starting: 
	I0816 13:44:03.390344   57945 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:03.390384   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:03.406808   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0816 13:44:03.407227   57945 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:03.407790   57945 main.go:141] libmachine: Using API Version  1
	I0816 13:44:03.407819   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:03.408124   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:03.408341   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:03.408506   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetState
	I0816 13:44:03.409993   57945 fix.go:112] recreateIfNeeded on old-k8s-version-882237: state=Stopped err=<nil>
	I0816 13:44:03.410029   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	W0816 13:44:03.410200   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:03.412299   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-882237" ...
	I0816 13:44:02.156024   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156199   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.156350   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.156557   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.156747   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.156758   57440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:02.261263   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:02.261290   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261514   57440 buildroot.go:166] provisioning hostname "no-preload-311070"
	I0816 13:44:02.261528   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.261696   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.264473   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.264892   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.264936   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.265030   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.265198   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265365   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.265485   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.265624   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.265796   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.265813   57440 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname
	I0816 13:44:02.384079   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-311070
	
	I0816 13:44:02.384112   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.386756   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387065   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.387104   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.387285   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.387501   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387699   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.387843   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.387997   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.388193   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.388218   57440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-311070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-311070/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-311070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:02.502089   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
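
	[editor's note] The hostname and /etc/hosts commands above are pushed to the VM at 192.168.61.116 over SSH with the id_rsa key logged further down. As a rough, hedged illustration only (not minikube's ssh_runner implementation), a minimal Go program using golang.org/x/crypto/ssh that runs the same hostname command could look like the sketch below; the key path is copied from the "new ssh client" line, and InsecureIgnoreHostKey is an assumption acceptable only for a throwaway test VM.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path taken from the "new ssh client" log lines in this section.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // only reasonable for a disposable test VM
		}
		client, err := ssh.Dial("tcp", "192.168.61.116:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		// Same command the provisioner logged under "About to run SSH command".
		out, err := session.CombinedOutput(`sudo hostname no-preload-311070 && echo "no-preload-311070" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}
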
	I0816 13:44:02.502122   57440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:02.502159   57440 buildroot.go:174] setting up certificates
	I0816 13:44:02.502173   57440 provision.go:84] configureAuth start
	I0816 13:44:02.502191   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetMachineName
	I0816 13:44:02.502484   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:02.505215   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505523   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.505560   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.505726   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.507770   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508033   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.508062   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.508193   57440 provision.go:143] copyHostCerts
	I0816 13:44:02.508249   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:02.508267   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:02.508336   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:02.508426   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:02.508435   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:02.508460   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:02.508520   57440 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:02.508527   57440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:02.508548   57440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:02.508597   57440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.no-preload-311070 san=[127.0.0.1 192.168.61.116 localhost minikube no-preload-311070]
	I0816 13:44:02.732379   57440 provision.go:177] copyRemoteCerts
	I0816 13:44:02.732434   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:02.732458   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.735444   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.735803   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.735837   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.736040   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.736274   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.736428   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.736587   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:02.819602   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:02.843489   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:44:02.866482   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:02.889908   57440 provision.go:87] duration metric: took 387.723287ms to configureAuth
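
	[editor's note] provision.go:117 above reports generating a server certificate signed by the cached CA, with SANs [127.0.0.1 192.168.61.116 localhost minikube no-preload-311070], which is then copied to /etc/docker on the guest. A hedged sketch of that style of CA-signed server certificate with Go's crypto/x509 follows; the in-memory CA, the 24h validity, and the omitted error handling are placeholders for brevity, since the real run reuses ca.pem/ca-key.pem from the .minikube/certs directory.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical throwaway CA; the actual run loads ca.pem / ca-key.pem instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"jenkins.no-preload-311070"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-311070"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-311070"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.116")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// PEM-encode, analogous to the server.pem that is later scp'd to /etc/docker.
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
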
	I0816 13:44:02.889936   57440 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:02.890151   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:02.890250   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:02.892851   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893158   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:02.893184   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:02.893381   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:02.893607   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893777   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:02.893925   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:02.894076   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:02.894267   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:02.894286   57440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:03.153730   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:03.153755   57440 machine.go:96] duration metric: took 1.000891309s to provisionDockerMachine
	I0816 13:44:03.153766   57440 start.go:293] postStartSetup for "no-preload-311070" (driver="kvm2")
	I0816 13:44:03.153776   57440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:03.153790   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.154088   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:03.154122   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.156612   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.156931   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.156969   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.157113   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.157299   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.157438   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.157595   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.241700   57440 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:03.246133   57440 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:03.246155   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:03.246221   57440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:03.246292   57440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:03.246379   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:03.257778   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:03.283511   57440 start.go:296] duration metric: took 129.718161ms for postStartSetup
	I0816 13:44:03.283552   57440 fix.go:56] duration metric: took 20.706029776s for fixHost
	I0816 13:44:03.283603   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.286296   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286608   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.286651   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.286803   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.287016   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287158   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.287298   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.287477   57440 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:03.287639   57440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0816 13:44:03.287649   57440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:03.389691   57440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815843.358144829
	
	I0816 13:44:03.389710   57440 fix.go:216] guest clock: 1723815843.358144829
	I0816 13:44:03.389717   57440 fix.go:229] Guest: 2024-08-16 13:44:03.358144829 +0000 UTC Remote: 2024-08-16 13:44:03.283556408 +0000 UTC m=+271.159980604 (delta=74.588421ms)
	I0816 13:44:03.389749   57440 fix.go:200] guest clock delta is within tolerance: 74.588421ms
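
	[editor's note] fix.go:216-229 above parses the guest's `date +%s.%N` output and compares it with the host-side timestamp; the resulting 74.588421ms delta is within tolerance, so no clock adjustment is made. A small stand-alone Go sketch of that comparison is shown below; the two-second tolerance used in the print statement is an assumption for this example only.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the "date +%s.%N" output captured above
	// (e.g. "1723815843.358144829") into a time.Time. It assumes the
	// fractional part is the full 9-digit nanosecond field.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, _ := parseGuestClock("1723815843.358144829")
		// Host-side "Remote" timestamp from the fix.go:229 line above.
		remote := time.Date(2024, 8, 16, 13, 44, 3, 283556408, time.UTC)
		delta := guest.Sub(remote)
		fmt.Println(delta, delta < 2*time.Second) // prints 74.588421ms true
	}
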
	I0816 13:44:03.389754   57440 start.go:83] releasing machines lock for "no-preload-311070", held for 20.812259998s
	I0816 13:44:03.389779   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.390029   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:03.392788   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393137   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.393160   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.393365   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.393870   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394042   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:03.394125   57440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:03.394180   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.394215   57440 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:03.394235   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:03.396749   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.396813   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397124   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397152   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397180   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:03.397197   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:03.397466   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397543   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:03.397717   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397731   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:03.397874   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397921   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:03.397998   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.398077   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:03.473552   57440 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:03.497958   57440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:03.644212   57440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:03.651347   57440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:03.651455   57440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:03.667822   57440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:03.667842   57440 start.go:495] detecting cgroup driver to use...
	I0816 13:44:03.667915   57440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:03.685838   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:03.700002   57440 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:03.700073   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:03.713465   57440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:03.726793   57440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:03.838274   57440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:03.967880   57440 docker.go:233] disabling docker service ...
	I0816 13:44:03.967951   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:03.982178   57440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:03.994574   57440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:04.132374   57440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:04.242820   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:04.257254   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:04.277961   57440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:04.278018   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.288557   57440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:04.288621   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.299108   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.310139   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.320850   57440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:04.332224   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.342905   57440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.361606   57440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:04.372423   57440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:04.382305   57440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:04.382355   57440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:04.396774   57440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:04.408417   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:04.516638   57440 ssh_runner.go:195] Run: sudo systemctl restart crio
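
	[editor's note] The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts CRI-O. A rough Go equivalent of the first two substitutions, using regexp in multiline mode, is sketched below; the sample input is an assumed excerpt of that file, not its actual contents on the VM.

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Assumed excerpt of /etc/crio/crio.conf.d/02-crio.conf before the edits.
		conf := `pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"`

		// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Println(conf)
	}
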
	I0816 13:44:04.684247   57440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:04.684316   57440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:04.689824   57440 start.go:563] Will wait 60s for crictl version
	I0816 13:44:04.689878   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:04.693456   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:04.732628   57440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:04.732712   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.760111   57440 ssh_runner.go:195] Run: crio --version
	I0816 13:44:04.790127   57440 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:03.413613   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .Start
	I0816 13:44:03.413783   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring networks are active...
	I0816 13:44:03.414567   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network default is active
	I0816 13:44:03.414873   57945 main.go:141] libmachine: (old-k8s-version-882237) Ensuring network mk-old-k8s-version-882237 is active
	I0816 13:44:03.415336   57945 main.go:141] libmachine: (old-k8s-version-882237) Getting domain xml...
	I0816 13:44:03.416198   57945 main.go:141] libmachine: (old-k8s-version-882237) Creating domain...
	I0816 13:44:04.671017   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting to get IP...
	I0816 13:44:04.672035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.672467   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.672560   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.672467   58914 retry.go:31] will retry after 271.707338ms: waiting for machine to come up
	I0816 13:44:04.946147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:04.946549   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:04.946577   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:04.946506   58914 retry.go:31] will retry after 324.872897ms: waiting for machine to come up
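
	[editor's note] In parallel with the no-preload provisioning, a second process (pid 57945) restarts the old-k8s-version-882237 VM and polls libvirt for a DHCP lease, pausing between attempts (271ms, 324ms, 300ms, 471ms, ... in the surrounding lines). A generic retry loop in that spirit, not minikube's retry.go, might look like the sketch below; the initial delay, growth factor, and jitter are illustrative values.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries a lookup with a growing, jittered delay. lookupIP stands
	// in for the libvirt DHCP-lease query the kvm2 driver performs.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // back off a little more each round
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) {
			return "", errors.New("no lease yet") // always fails in this toy example
		}, 3*time.Second)
		fmt.Println(ip, err)
	}
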
	I0816 13:44:04.791315   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetIP
	I0816 13:44:04.794224   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794587   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:04.794613   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:04.794794   57440 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:04.798848   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:04.811522   57440 kubeadm.go:883] updating cluster {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:04.811628   57440 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:04.811685   57440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:04.845546   57440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:04.845567   57440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:04.845630   57440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.845654   57440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.845687   57440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:04.845714   57440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.845694   57440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.845789   57440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.845839   57440 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 13:44:04.845875   57440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:04.847440   57440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:04.847454   57440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:04.847428   57440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:04.847484   57440 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 13:44:04.847429   57440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:04.847431   57440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:04.847508   57440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.036225   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.071514   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.075186   57440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 13:44:05.075233   57440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.075273   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111591   57440 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 13:44:05.111634   57440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.111687   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.111704   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.145127   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.145289   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.186194   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.200886   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 13:44:05.203824   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.208201   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.209021   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.234117   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 13:44:05.234893   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.245119   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 13:44:05.305971   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 13:44:05.306084   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.374880   57440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 13:44:05.374922   57440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.374971   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399114   57440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 13:44:05.399156   57440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.399187   57440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 13:44:05.399216   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399225   57440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.399267   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.399318   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 13:44:05.399414   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:05.401940   57440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 13:44:05.401975   57440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.402006   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:05.513930   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 13:44:05.513961   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514017   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 13:44:05.514032   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.514059   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:05.514112   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 13:44:05.514115   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.514150   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:05.634275   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:05.634340   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:05.864118   57440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:05.273252   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.273730   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.273758   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.273682   58914 retry.go:31] will retry after 300.46858ms: waiting for machine to come up
	I0816 13:44:05.576567   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:05.577060   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:05.577088   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:05.577023   58914 retry.go:31] will retry after 471.968976ms: waiting for machine to come up
	I0816 13:44:06.050651   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.051035   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.051075   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.051005   58914 retry.go:31] will retry after 696.85088ms: waiting for machine to come up
	I0816 13:44:06.750108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:06.750611   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:06.750643   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:06.750548   58914 retry.go:31] will retry after 752.204898ms: waiting for machine to come up
	I0816 13:44:07.504321   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:07.504741   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:07.504766   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:07.504706   58914 retry.go:31] will retry after 734.932569ms: waiting for machine to come up
	I0816 13:44:08.241587   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:08.241950   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:08.241977   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:08.241895   58914 retry.go:31] will retry after 1.245731112s: waiting for machine to come up
	I0816 13:44:09.488787   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:09.489326   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:09.489370   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:09.489282   58914 retry.go:31] will retry after 1.454286295s: waiting for machine to come up
	I0816 13:44:07.542707   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.028664898s)
	I0816 13:44:07.542745   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 13:44:07.542770   57440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542773   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.028589727s)
	I0816 13:44:07.542817   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.028737534s)
	I0816 13:44:07.542831   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 13:44:07.542837   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:07.542869   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:07.542888   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.908584925s)
	I0816 13:44:07.542937   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 13:44:07.542951   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.908590671s)
	I0816 13:44:07.542995   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 13:44:07.543034   57440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.678883978s)
	I0816 13:44:07.543068   57440 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 13:44:07.543103   57440 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:07.543138   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:44:11.390456   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (3.847434032s)
	I0816 13:44:11.390507   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 13:44:11.390610   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.390647   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.847797916s)
	I0816 13:44:11.390674   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 13:44:11.390684   57440 ssh_runner.go:235] Completed: which crictl: (3.847535001s)
	I0816 13:44:11.390740   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.390780   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (3.847819859s)
	I0816 13:44:11.390810   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.847960553s)
	I0816 13:44:11.390825   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 13:44:11.390848   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 13:44:11.390908   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:11.390923   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (3.848033361s)
	I0816 13:44:11.390978   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 13:44:11.461833   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 13:44:11.461859   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461905   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 13:44:11.461922   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 13:44:11.461843   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:11.461990   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 13:44:11.462013   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:11.462557   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 13:44:11.462649   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:10.944947   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:10.945395   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:10.945459   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:10.945352   58914 retry.go:31] will retry after 1.738238967s: waiting for machine to come up
	I0816 13:44:12.686147   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:12.686673   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:12.686701   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:12.686630   58914 retry.go:31] will retry after 2.778761596s: waiting for machine to come up
	I0816 13:44:13.839070   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.377139357s)
	I0816 13:44:13.839101   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 13:44:13.839141   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839207   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 13:44:13.839255   57440 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377282192s)
	I0816 13:44:13.839312   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.377281378s)
	I0816 13:44:13.839358   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 13:44:13.839358   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.376690281s)
	I0816 13:44:13.839379   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 13:44:13.839318   57440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:13.880059   57440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 13:44:13.880203   57440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:15.818912   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.979684366s)
	I0816 13:44:15.818943   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 13:44:15.818975   57440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.938747663s)
	I0816 13:44:15.818986   57440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.819000   57440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 13:44:15.819043   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 13:44:15.468356   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:15.468788   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:15.468817   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:15.468739   58914 retry.go:31] will retry after 2.807621726s: waiting for machine to come up
	I0816 13:44:18.277604   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:18.277980   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | unable to find current IP address of domain old-k8s-version-882237 in network mk-old-k8s-version-882237
	I0816 13:44:18.278013   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | I0816 13:44:18.277937   58914 retry.go:31] will retry after 4.131806684s: waiting for machine to come up
	I0816 13:44:17.795989   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976923514s)
	I0816 13:44:17.796013   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 13:44:17.796040   57440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:17.796088   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 13:44:19.147815   57440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351703003s)
	I0816 13:44:19.147843   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 13:44:19.147869   57440 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.147919   57440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 13:44:19.791370   57440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 13:44:19.791414   57440 cache_images.go:123] Successfully loaded all cached images
	I0816 13:44:19.791421   57440 cache_images.go:92] duration metric: took 14.945842963s to LoadCachedImages
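
	[editor's note] Because no preload tarball is usable for this profile (crio.go:510 above), cache_images.go removes the stale tags with crictl and streams each cached tarball from /var/lib/minikube/images into the runtime with `sudo podman load -i ...`, taking ~14.9s in total. A minimal sketch of that final load step with os/exec is shown below; the image path is copied from the log, and the command is run on the host here rather than over SSH for brevity.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Equivalent of: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
		cmd := exec.Command("sudo", "podman", "load", "-i", "/var/lib/minikube/images/kube-proxy_v1.31.0")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("podman load failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}
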
	I0816 13:44:19.791440   57440 kubeadm.go:934] updating node { 192.168.61.116 8443 v1.31.0 crio true true} ...
	I0816 13:44:19.791593   57440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-311070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:19.791681   57440 ssh_runner.go:195] Run: crio config
	I0816 13:44:19.843963   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:19.843984   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:19.844003   57440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:19.844029   57440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-311070 NodeName:no-preload-311070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:19.844189   57440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-311070"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:19.844250   57440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:19.854942   57440 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:19.855014   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:19.864794   57440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 13:44:19.881210   57440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:19.897450   57440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
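The 2161-byte kubeadm.yaml.new written just above is the stacked manifest printed earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A hedged way to sanity-check such a file before it is swapped into place ("kubeadm config validate" exists in recent kubeadm releases; older versions only expose "kubeadm config migrate"/"print"):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new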
	I0816 13:44:19.916038   57440 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:19.919995   57440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
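The one-liner above is how minikube pins control-plane.minikube.internal in the guest's /etc/hosts: drop any existing entry, append a fresh one, and sudo-copy the temp file back so only the copy, not the redirection, needs root. The same pattern, generalized (host and IP are simply the values from this run):

    HOST=control-plane.minikube.internal
    IP=192.168.61.116
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$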
	I0816 13:44:19.934081   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:20.077422   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:20.093846   57440 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070 for IP: 192.168.61.116
	I0816 13:44:20.093864   57440 certs.go:194] generating shared ca certs ...
	I0816 13:44:20.093881   57440 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:20.094055   57440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:20.094120   57440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:20.094135   57440 certs.go:256] generating profile certs ...
	I0816 13:44:20.094236   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.key
	I0816 13:44:20.094325   57440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key.000c4f90
	I0816 13:44:20.094391   57440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key
	I0816 13:44:20.094529   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:20.094571   57440 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:20.094584   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:20.094621   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:20.094654   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:20.094795   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:20.094874   57440 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:20.096132   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:20.130987   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:20.160701   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:20.187948   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:20.217162   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 13:44:20.242522   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 13:44:20.273582   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:20.300613   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:20.328363   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:20.353396   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:20.377770   57440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:20.401760   57440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:20.418302   57440 ssh_runner.go:195] Run: openssl version
	I0816 13:44:20.424065   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:20.434841   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439352   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.439398   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:20.445210   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:20.455727   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:20.466095   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470528   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.470568   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:20.476080   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:20.486189   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:20.496373   57440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500696   57440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.500737   57440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:20.506426   57440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:20.517130   57440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:20.521664   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:20.527604   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:20.533478   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:20.539285   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:20.545042   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:20.550681   57440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
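The openssl calls in this block do two different jobs: "x509 -hash -noout" prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created here (3ec20f2e.0, b5213941.0, 51391683.0), while "-checkend 86400" exits non-zero if a certificate expires within the next 24 hours, which is how the existing control-plane certs are judged still usable. For any one of the certs checked above:

    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    openssl x509 -hash -noout -in "$CERT"        # hash used for the /etc/ssl/certs/<hash>.0 link
    if openssl x509 -noout -in "$CERT" -checkend 86400; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h (or already expired)"
    fi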
	I0816 13:44:20.556239   57440 kubeadm.go:392] StartCluster: {Name:no-preload-311070 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-311070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:20.556314   57440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:20.556391   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.594069   57440 cri.go:89] found id: ""
	I0816 13:44:20.594128   57440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:20.604067   57440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:20.604085   57440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:20.604131   57440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:20.614182   57440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:20.615072   57440 kubeconfig.go:125] found "no-preload-311070" server: "https://192.168.61.116:8443"
	I0816 13:44:20.617096   57440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:20.626046   57440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.116
	I0816 13:44:20.626069   57440 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:20.626078   57440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:20.626114   57440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:20.659889   57440 cri.go:89] found id: ""
	I0816 13:44:20.659954   57440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:20.676977   57440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:20.686930   57440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:20.686946   57440 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:20.686985   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:20.696144   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:20.696222   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:20.705550   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:20.714350   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:20.714399   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:20.723636   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.732287   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:20.732329   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:20.741390   57440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:20.749913   57440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:20.749956   57440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:20.758968   57440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:20.768054   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:20.872847   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:21.933273   57440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060394194s)
	I0816 13:44:21.933303   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.130462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
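Because existing configuration was detected, restartPrimaryControlPlane replays individual kubeadm phases against the freshly copied kubeadm.yaml instead of running a full kubeadm init. The sequence visible here, stripped of the PATH wrapper used in the log:

    CFG=/var/tmp/minikube/kubeadm.yaml
    kubeadm init phase certs all         --config "$CFG"
    kubeadm init phase kubeconfig all    --config "$CFG"
    kubeadm init phase kubelet-start     --config "$CFG"
    kubeadm init phase control-plane all --config "$CFG"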
	I0816 13:44:23.689897   58430 start.go:364] duration metric: took 2m7.587518205s to acquireMachinesLock for "default-k8s-diff-port-893736"
	I0816 13:44:23.689958   58430 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:23.689965   58430 fix.go:54] fixHost starting: 
	I0816 13:44:23.690363   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:23.690401   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:23.707766   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0816 13:44:23.708281   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:23.709439   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:23.709462   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:23.709757   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:23.709906   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:23.710017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:23.711612   58430 fix.go:112] recreateIfNeeded on default-k8s-diff-port-893736: state=Stopped err=<nil>
	I0816 13:44:23.711655   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	W0816 13:44:23.711797   58430 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:23.713600   58430 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-893736" ...
	I0816 13:44:22.413954   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.414552   57945 main.go:141] libmachine: (old-k8s-version-882237) Found IP for machine: 192.168.72.105
	I0816 13:44:22.414575   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserving static IP address...
	I0816 13:44:22.414591   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has current primary IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.415085   57945 main.go:141] libmachine: (old-k8s-version-882237) Reserved static IP address: 192.168.72.105
	I0816 13:44:22.415142   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.415157   57945 main.go:141] libmachine: (old-k8s-version-882237) Waiting for SSH to be available...
	I0816 13:44:22.415183   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | skip adding static IP to network mk-old-k8s-version-882237 - found existing host DHCP lease matching {name: "old-k8s-version-882237", mac: "52:54:00:ce:02:bd", ip: "192.168.72.105"}
	I0816 13:44:22.415195   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Getting to WaitForSSH function...
	I0816 13:44:22.417524   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417844   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.417875   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.417987   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH client type: external
	I0816 13:44:22.418014   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa (-rw-------)
	I0816 13:44:22.418052   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:22.418072   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | About to run SSH command:
	I0816 13:44:22.418086   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | exit 0
	I0816 13:44:22.536890   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | SSH cmd err, output: <nil>: 
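The WaitForSSH probe above shells out to the system ssh client with host-key checking disabled and simply runs "exit 0" until the guest answers. Reproduced as a standalone command (key path, user and IP are the ones from this run):

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa \
      docker@192.168.72.105 'exit 0' && echo "SSH is up"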
	I0816 13:44:22.537284   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetConfigRaw
	I0816 13:44:22.537843   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.540100   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540454   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.540490   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.540683   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/config.json ...
	I0816 13:44:22.540939   57945 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:22.540960   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:22.541184   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.543102   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543385   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.543413   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.543505   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.543664   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543798   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.543991   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.544177   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.544497   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.544519   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:22.641319   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:22.641355   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641606   57945 buildroot.go:166] provisioning hostname "old-k8s-version-882237"
	I0816 13:44:22.641630   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.641820   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.644657   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645053   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.645085   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.645279   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.645476   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645656   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.645827   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.646013   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.646233   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.646248   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-882237 && echo "old-k8s-version-882237" | sudo tee /etc/hostname
	I0816 13:44:22.759488   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-882237
	
	I0816 13:44:22.759526   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.762382   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762774   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.762811   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.762959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:22.763163   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763353   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:22.763534   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:22.763738   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:22.763967   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:22.763995   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-882237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-882237/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-882237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:22.878120   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:22.878158   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:22.878215   57945 buildroot.go:174] setting up certificates
	I0816 13:44:22.878230   57945 provision.go:84] configureAuth start
	I0816 13:44:22.878244   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetMachineName
	I0816 13:44:22.878581   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:22.881426   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881808   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.881843   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.881971   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:22.884352   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884750   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:22.884778   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:22.884932   57945 provision.go:143] copyHostCerts
	I0816 13:44:22.884994   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:22.885016   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:22.885084   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:22.885230   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:22.885242   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:22.885276   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:22.885374   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:22.885383   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:22.885415   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:22.885503   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-882237 san=[127.0.0.1 192.168.72.105 localhost minikube old-k8s-version-882237]
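Here provision.go generates a fresh Docker-machine server certificate signed by the profile CA, with the SAN list shown above (loopback, the VM IP, localhost, minikube, and the profile name). A rough openssl equivalent, self-signed for brevity where minikube actually signs with ca.pem/ca-key.pem, and needing OpenSSL 1.1.1+ for -addext:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.old-k8s-version-882237" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.72.105,DNS:localhost,DNS:minikube,DNS:old-k8s-version-882237"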
	I0816 13:44:23.017446   57945 provision.go:177] copyRemoteCerts
	I0816 13:44:23.017519   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:23.017555   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.020030   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020423   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.020460   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.020678   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.020888   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.021076   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.021199   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.100006   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 13:44:23.128795   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:23.157542   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:23.182619   57945 provision.go:87] duration metric: took 304.375843ms to configureAuth
	I0816 13:44:23.182652   57945 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:23.182882   57945 config.go:182] Loaded profile config "old-k8s-version-882237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:44:23.182984   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.186043   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186441   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.186474   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.186648   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.186844   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187015   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.187196   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.187383   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.187566   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.187587   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:23.459221   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:23.459248   57945 machine.go:96] duration metric: took 918.295024ms to provisionDockerMachine
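The SSH command that closed out provisioning writes /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarts crio to pick it up. To see what landed on the guest (profile name from this run):

    minikube ssh -p old-k8s-version-882237 -- \
      'cat /etc/sysconfig/crio.minikube; systemctl is-active crio'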
	I0816 13:44:23.459261   57945 start.go:293] postStartSetup for "old-k8s-version-882237" (driver="kvm2")
	I0816 13:44:23.459275   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:23.459305   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.459614   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:23.459649   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.462624   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463010   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.463033   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.463210   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.463405   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.463584   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.463715   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.550785   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:23.554984   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:23.555009   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:23.555078   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:23.555171   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:23.555290   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:23.564655   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:23.588471   57945 start.go:296] duration metric: took 129.196791ms for postStartSetup
	I0816 13:44:23.588515   57945 fix.go:56] duration metric: took 20.198590598s for fixHost
	I0816 13:44:23.588544   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.591443   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591805   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.591835   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.591959   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.592145   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592354   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.592492   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.592668   57945 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:23.592868   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.105 22 <nil> <nil>}
	I0816 13:44:23.592885   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:23.689724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815863.663875328
	
	I0816 13:44:23.689760   57945 fix.go:216] guest clock: 1723815863.663875328
	I0816 13:44:23.689771   57945 fix.go:229] Guest: 2024-08-16 13:44:23.663875328 +0000 UTC Remote: 2024-08-16 13:44:23.588520483 +0000 UTC m=+233.521229154 (delta=75.354845ms)
	I0816 13:44:23.689796   57945 fix.go:200] guest clock delta is within tolerance: 75.354845ms
	I0816 13:44:23.689806   57945 start.go:83] releasing machines lock for "old-k8s-version-882237", held for 20.299922092s
	I0816 13:44:23.689839   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.690115   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:23.692683   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693079   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.693108   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.693268   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693753   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.693926   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .DriverName
	I0816 13:44:23.694009   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:23.694062   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.694142   57945 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:23.694167   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHHostname
	I0816 13:44:23.696872   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.696897   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697247   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697281   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:23.697309   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697340   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:23.697622   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697801   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHPort
	I0816 13:44:23.697830   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.697974   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698010   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHKeyPath
	I0816 13:44:23.698144   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetSSHUsername
	I0816 13:44:23.698155   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.698312   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/old-k8s-version-882237/id_rsa Username:docker}
	I0816 13:44:23.774706   57945 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:23.802788   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:23.955361   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:23.963291   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:23.963363   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:23.979542   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:23.979579   57945 start.go:495] detecting cgroup driver to use...
	I0816 13:44:23.979645   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:24.002509   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:24.019715   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:24.019773   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:24.033677   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:24.049195   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:24.168789   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:24.346709   57945 docker.go:233] disabling docker service ...
	I0816 13:44:24.346772   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:24.363739   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:24.378785   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:24.547705   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:24.738866   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:24.756139   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:24.775999   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 13:44:24.776060   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.786682   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:24.786783   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.797385   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.807825   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:24.817919   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:24.828884   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:24.838725   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:24.838782   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:24.852544   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:24.868302   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:24.980614   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:25.122584   57945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:25.122660   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:25.128622   57945 start.go:563] Will wait 60s for crictl version
	I0816 13:44:25.128694   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:25.133726   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:25.188714   57945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:25.188801   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.223719   57945 ssh_runner.go:195] Run: crio --version
	I0816 13:44:25.263894   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 13:44:23.714877   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Start
	I0816 13:44:23.715069   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring networks are active...
	I0816 13:44:23.715788   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network default is active
	I0816 13:44:23.716164   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Ensuring network mk-default-k8s-diff-port-893736 is active
	I0816 13:44:23.716648   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Getting domain xml...
	I0816 13:44:23.717424   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Creating domain...
	I0816 13:44:24.979917   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting to get IP...
	I0816 13:44:24.980942   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981375   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:24.981448   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:24.981363   59082 retry.go:31] will retry after 199.038336ms: waiting for machine to come up
	I0816 13:44:25.181886   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182350   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.182374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.182330   59082 retry.go:31] will retry after 297.566018ms: waiting for machine to come up
	I0816 13:44:25.481811   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482271   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.482296   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.482234   59082 retry.go:31] will retry after 297.833233ms: waiting for machine to come up
	I0816 13:44:25.781831   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782445   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:25.782479   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:25.782400   59082 retry.go:31] will retry after 459.810978ms: waiting for machine to come up
	I0816 13:44:22.220022   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:22.317717   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:22.317800   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:22.818025   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.318171   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:23.354996   57440 api_server.go:72] duration metric: took 1.037294965s to wait for apiserver process to appear ...
	I0816 13:44:23.355023   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:23.355043   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:23.355677   57440 api_server.go:269] stopped: https://192.168.61.116:8443/healthz: Get "https://192.168.61.116:8443/healthz": dial tcp 192.168.61.116:8443: connect: connection refused
	I0816 13:44:23.855190   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.719152   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.719184   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.719204   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.756329   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:26.756366   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:26.855581   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:26.862856   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:26.862885   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.355555   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.365664   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.365702   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:27.855844   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:27.863185   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:27.863227   57440 api_server.go:103] status: https://192.168.61.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:28.355490   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:44:28.361410   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:44:28.374558   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:28.374593   57440 api_server.go:131] duration metric: took 5.019562023s to wait for apiserver health ...
	I0816 13:44:28.374604   57440 cni.go:84] Creating CNI manager for ""
	I0816 13:44:28.374613   57440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:28.376749   57440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:44:28.378413   57440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:28.401199   57440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:28.420798   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:28.452605   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:28.452645   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:28.452655   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:28.452663   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:28.452671   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:28.452680   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:28.452689   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:28.452704   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:28.452710   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:44:28.452719   57440 system_pods.go:74] duration metric: took 31.89892ms to wait for pod list to return data ...
	I0816 13:44:28.452726   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:28.463229   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:28.463262   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:28.463275   57440 node_conditions.go:105] duration metric: took 10.544476ms to run NodePressure ...
	I0816 13:44:28.463296   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:28.809304   57440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819091   57440 kubeadm.go:739] kubelet initialised
	I0816 13:44:28.819115   57440 kubeadm.go:740] duration metric: took 9.779672ms waiting for restarted kubelet to initialise ...
	I0816 13:44:28.819124   57440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:28.827828   57440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.840277   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840310   57440 pod_ready.go:82] duration metric: took 12.450089ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.840322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.840333   57440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.847012   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847036   57440 pod_ready.go:82] duration metric: took 6.692927ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.847045   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "etcd-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.847050   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.861358   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861404   57440 pod_ready.go:82] duration metric: took 14.346379ms for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.861417   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-apiserver-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.861428   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:28.870641   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870663   57440 pod_ready.go:82] duration metric: took 9.224713ms for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:28.870671   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:28.870678   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.224281   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224310   57440 pod_ready.go:82] duration metric: took 353.622663ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.224322   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-proxy-b8d5b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.224331   57440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:29.624518   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624552   57440 pod_ready.go:82] duration metric: took 400.212041ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:29.624567   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "kube-scheduler-no-preload-311070" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:29.624577   57440 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:30.030291   57440 pod_ready.go:98] node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030327   57440 pod_ready.go:82] duration metric: took 405.73495ms for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:30.030341   57440 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-311070" hosting pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:30.030352   57440 pod_ready.go:39] duration metric: took 1.211214389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:30.030372   57440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:30.045247   57440 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:30.045279   57440 kubeadm.go:597] duration metric: took 9.441179951s to restartPrimaryControlPlane
	I0816 13:44:30.045291   57440 kubeadm.go:394] duration metric: took 9.489057744s to StartCluster
	I0816 13:44:30.045312   57440 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.045410   57440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:30.047053   57440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:30.047310   57440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:30.047415   57440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:30.047486   57440 addons.go:69] Setting storage-provisioner=true in profile "no-preload-311070"
	I0816 13:44:30.047521   57440 addons.go:234] Setting addon storage-provisioner=true in "no-preload-311070"
	W0816 13:44:30.047534   57440 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:30.047569   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048048   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048079   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048302   57440 addons.go:69] Setting default-storageclass=true in profile "no-preload-311070"
	I0816 13:44:30.048339   57440 addons.go:69] Setting metrics-server=true in profile "no-preload-311070"
	I0816 13:44:30.048377   57440 addons.go:234] Setting addon metrics-server=true in "no-preload-311070"
	W0816 13:44:30.048387   57440 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:30.048424   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.048343   57440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-311070"
	I0816 13:44:30.048812   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048834   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.048933   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.048957   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.049282   57440 config.go:182] Loaded profile config "no-preload-311070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:30.050905   57440 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:30.052478   57440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:30.069405   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0816 13:44:30.069463   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0816 13:44:30.069735   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0816 13:44:30.069949   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070072   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070145   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.070488   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070506   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070586   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070598   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070618   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.070627   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.070977   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071006   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071031   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.071212   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071602   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.071639   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.071621   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.074680   57440 addons.go:234] Setting addon default-storageclass=true in "no-preload-311070"
	W0816 13:44:30.074699   57440 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:30.074730   57440 host.go:66] Checking if "no-preload-311070" exists ...
	I0816 13:44:30.075073   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.075100   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.088961   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0816 13:44:30.089421   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.089952   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.089971   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.090128   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0816 13:44:30.090429   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.090491   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.090744   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.090933   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.090950   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.091263   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.091463   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.093258   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.093571   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:25.265126   57945 main.go:141] libmachine: (old-k8s-version-882237) Calling .GetIP
	I0816 13:44:25.268186   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268630   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:02:bd", ip: ""} in network mk-old-k8s-version-882237: {Iface:virbr4 ExpiryTime:2024-08-16 14:44:14 +0000 UTC Type:0 Mac:52:54:00:ce:02:bd Iaid: IPaddr:192.168.72.105 Prefix:24 Hostname:old-k8s-version-882237 Clientid:01:52:54:00:ce:02:bd}
	I0816 13:44:25.268662   57945 main.go:141] libmachine: (old-k8s-version-882237) DBG | domain old-k8s-version-882237 has defined IP address 192.168.72.105 and MAC address 52:54:00:ce:02:bd in network mk-old-k8s-version-882237
	I0816 13:44:25.268927   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:25.274101   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:25.288155   57945 kubeadm.go:883] updating cluster {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:25.288260   57945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 13:44:25.288311   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:25.342303   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:25.342377   57945 ssh_runner.go:195] Run: which lz4
	I0816 13:44:25.346641   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:25.350761   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:25.350793   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 13:44:27.052140   57945 crio.go:462] duration metric: took 1.705504554s to copy over tarball
	I0816 13:44:27.052223   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:30.094479   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0816 13:44:30.094965   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.095482   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.095502   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.095857   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.096322   57440 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:30.096361   57440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:30.128555   57440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.128676   57440 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:26.244353   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245158   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.245183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.245062   59082 retry.go:31] will retry after 680.176025ms: waiting for machine to come up
	I0816 13:44:26.926654   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927139   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:26.927183   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:26.927106   59082 retry.go:31] will retry after 720.530442ms: waiting for machine to come up
	I0816 13:44:27.648858   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649342   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:27.649367   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:27.649289   59082 retry.go:31] will retry after 930.752133ms: waiting for machine to come up
	I0816 13:44:28.581283   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581684   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:28.581709   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:28.581635   59082 retry.go:31] will retry after 972.791503ms: waiting for machine to come up
	I0816 13:44:29.556168   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:29.556583   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:29.556525   59082 retry.go:31] will retry after 1.18129541s: waiting for machine to come up
	I0816 13:44:30.739498   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739951   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:30.739978   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:30.739883   59082 retry.go:31] will retry after 2.27951459s: waiting for machine to come up
	I0816 13:44:30.133959   57440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 13:44:30.134516   57440 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:30.135080   57440 main.go:141] libmachine: Using API Version  1
	I0816 13:44:30.135105   57440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:30.135463   57440 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:30.135598   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetState
	I0816 13:44:30.137494   57440 main.go:141] libmachine: (no-preload-311070) Calling .DriverName
	I0816 13:44:30.137805   57440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.137824   57440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:30.137839   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.141006   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141509   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.141544   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.141772   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.141952   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.142150   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.142305   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.164598   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:30.164627   57440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:30.164653   57440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.164654   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.164662   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:30.164687   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHHostname
	I0816 13:44:30.168935   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169259   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169588   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169615   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169806   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.169828   57440 main.go:141] libmachine: (no-preload-311070) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:17:b3", ip: ""} in network mk-no-preload-311070: {Iface:virbr3 ExpiryTime:2024-08-16 14:43:53 +0000 UTC Type:0 Mac:52:54:00:14:17:b3 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:no-preload-311070 Clientid:01:52:54:00:14:17:b3}
	I0816 13:44:30.169859   57440 main.go:141] libmachine: (no-preload-311070) DBG | domain no-preload-311070 has defined IP address 192.168.61.116 and MAC address 52:54:00:14:17:b3 in network mk-no-preload-311070
	I0816 13:44:30.169953   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170096   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170103   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHPort
	I0816 13:44:30.170243   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHKeyPath
	I0816 13:44:30.170241   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.170389   57440 main.go:141] libmachine: (no-preload-311070) Calling .GetSSHUsername
	I0816 13:44:30.170505   57440 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/no-preload-311070/id_rsa Username:docker}
	I0816 13:44:30.285806   57440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:30.312267   57440 node_ready.go:35] waiting up to 6m0s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:30.406371   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:30.409491   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:30.409515   57440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:30.440485   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:30.440508   57440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:30.480735   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:30.484549   57440 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:30.484573   57440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:30.541485   57440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:32.535406   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:33.204746   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.723973086s)
	I0816 13:44:33.204802   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204817   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.204843   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.798437569s)
	I0816 13:44:33.204877   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.204889   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205092   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205116   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205126   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205134   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205357   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205359   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205379   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205387   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.205395   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.205408   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.205445   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205454   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.205593   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.205605   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.214075   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.214095   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.214307   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.214320   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259136   57440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.717608276s)
	I0816 13:44:33.259188   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259212   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259468   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.259485   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.259495   57440 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:33.259503   57440 main.go:141] libmachine: (no-preload-311070) Calling .Close
	I0816 13:44:33.259988   57440 main.go:141] libmachine: (no-preload-311070) DBG | Closing plugin on server side
	I0816 13:44:33.260004   57440 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:33.260016   57440 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:33.260026   57440 addons.go:475] Verifying addon metrics-server=true in "no-preload-311070"
	I0816 13:44:33.262190   57440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 13:44:30.191146   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.138885293s)
	I0816 13:44:30.191188   57945 crio.go:469] duration metric: took 3.139020745s to extract the tarball
	I0816 13:44:30.191198   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:30.249011   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:30.285826   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 13:44:30.285847   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 13:44:30.285918   57945 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.285940   57945 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.285947   57945 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.285971   57945 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.286019   57945 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.285922   57945 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.285979   57945 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288208   57945 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.288272   57945 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:30.288275   57945 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.288205   57945 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.288303   57945 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 13:44:30.288320   57945 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.288211   57945 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.288207   57945 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.434593   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.434847   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.438852   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.449704   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.451130   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.454848   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.513569   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 13:44:30.594404   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 13:44:30.594453   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.594509   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.612653   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 13:44:30.612699   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.612746   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.652117   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 13:44:30.652162   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.652214   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681057   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 13:44:30.681116   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.681163   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.681239   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 13:44:30.681296   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.681341   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.688696   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 13:44:30.688739   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.688785   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706749   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 13:44:30.706802   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 13:44:30.706816   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.706843   57945 ssh_runner.go:195] Run: which crictl
	I0816 13:44:30.706911   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.706938   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.706987   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.707031   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:30.707045   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913446   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:30.913520   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:30.913548   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:30.913611   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:30.913653   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:30.913675   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:30.913813   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.079066   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 13:44:31.079100   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 13:44:31.079140   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 13:44:31.103707   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 13:44:31.103890   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 13:44:31.106587   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 13:44:31.106723   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.210359   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:31.226549   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 13:44:31.226605   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 13:44:31.226648   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 13:44:31.266238   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 13:44:31.266256   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 13:44:31.269423   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 13:44:31.270551   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 13:44:31.399144   57945 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 13:44:31.399220   57945 cache_images.go:92] duration metric: took 1.113354806s to LoadCachedImages
	W0816 13:44:31.399297   57945 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
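The warning above is raised because the expected image tarballs are missing from the local cache directory. As an illustration only (not minikube's actual cache code; the helper name missingCachedImages and the paths in main are assumptions), a minimal Go sketch of this kind of "stat the cache, report what is absent" check:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// missingCachedImages is a hypothetical check mirroring the warning above: it
// stats each expected image tarball under cacheDir and returns the ones absent.
func missingCachedImages(cacheDir string, images []string) []string {
	var missing []string
	for _, img := range images {
		if _, err := os.Stat(filepath.Join(cacheDir, img)); os.IsNotExist(err) {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	// Directory and file names are illustrative, modeled on the paths in the log.
	cache := "/home/jenkins/minikube-integration/19423-3966/.minikube/cache/images/amd64/registry.k8s.io"
	imgs := []string{"kube-apiserver_v1.20.0", "etcd_3.4.13-0", "coredns_1.7.0"}
	fmt.Println("missing:", missingCachedImages(cache, imgs))
}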
	I0816 13:44:31.399311   57945 kubeadm.go:934] updating node { 192.168.72.105 8443 v1.20.0 crio true true} ...
	I0816 13:44:31.399426   57945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-882237 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:31.399515   57945 ssh_runner.go:195] Run: crio config
	I0816 13:44:31.459182   57945 cni.go:84] Creating CNI manager for ""
	I0816 13:44:31.459226   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:31.459244   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:31.459270   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-882237 NodeName:old-k8s-version-882237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 13:44:31.459439   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-882237"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:31.459521   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 13:44:31.470415   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:31.470500   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:31.480890   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 13:44:31.498797   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:31.516425   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
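At this point the generated kubeadm config shown above has been written out as /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch of how such a file could be produced (this is not minikube's actual generator; the template, struct fields, and values below are assumptions modeled on the log output), a minimal Go example rendering the InitConfiguration fragment from a text/template:

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a hypothetical, trimmed-down version of the InitConfiguration
// fragment shown in the log above; only a few fields are templated.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type kubeadmParams struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	p := kubeadmParams{
		NodeIP:        "192.168.72.105",
		APIServerPort: 8443,
		CRISocket:     "/var/run/crio/crio.sock",
		NodeName:      "old-k8s-version-882237",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Render to stdout; the test writes the full config to /var/tmp/minikube/kubeadm.yaml.new.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}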
	I0816 13:44:31.536528   57945 ssh_runner.go:195] Run: grep 192.168.72.105	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:31.540569   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
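The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it drops any existing line for that name and appends the new mapping. A minimal Go sketch of the equivalent edit, assuming a hypothetical helper ensureHostsEntry and a scratch file path (the real command edits /etc/hosts via sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner in the log: it removes any line
// that maps hostname and appends "ip\thostname" at the end of the file.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale mapping, like the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch copy for illustration; paths and values are taken from the log.
	path := "/tmp/hosts.example"
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry(path, "192.168.72.105", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}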
	I0816 13:44:31.553530   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:31.693191   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:31.711162   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237 for IP: 192.168.72.105
	I0816 13:44:31.711190   57945 certs.go:194] generating shared ca certs ...
	I0816 13:44:31.711209   57945 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:31.711382   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:31.711465   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:31.711478   57945 certs.go:256] generating profile certs ...
	I0816 13:44:31.711596   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.key
	I0816 13:44:31.711676   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key.e63f19d8
	I0816 13:44:31.711739   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key
	I0816 13:44:31.711906   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:31.711969   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:31.711984   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:31.712019   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:31.712058   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:31.712089   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:31.712146   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:31.713101   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:31.748701   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:31.789308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:31.814410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:31.841281   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 13:44:31.867939   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:31.894410   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:31.921591   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:44:31.952356   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:31.982171   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:32.008849   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:32.034750   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:32.051812   57945 ssh_runner.go:195] Run: openssl version
	I0816 13:44:32.057774   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:32.068553   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073022   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.073095   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:32.079239   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:32.089825   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:32.100630   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105792   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.105851   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:32.112004   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:32.122723   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:32.133560   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138215   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.138260   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:32.144059   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
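The sequence above installs each CA certificate by linking it into /usr/share/ca-certificates, asking openssl for its subject hash, and then creating the <hash>.0 symlink in /etc/ssl/certs that OpenSSL uses to look up trust anchors. A hedged Go sketch of that last step, shelling out to openssl the same way (the helper name linkCertByHash and the paths in main are assumptions, and the example links into a scratch directory rather than /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the openssl/ln pair from the log: it asks openssl
// for the certificate's subject hash and symlinks <hash>.0 in certsDir to it.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the test links into /etc/ssl/certs via sudo.
	_ = os.MkdirAll("/tmp/certs", 0755)
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}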
	I0816 13:44:32.155210   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:32.159746   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:32.165984   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:32.171617   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:32.177778   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:32.183623   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:32.189537   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
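The openssl `-checkend 86400` calls above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be expressed natively in Go with crypto/x509; this is a sketch only (the helper name expiresWithin and the path in main are assumptions), not the code the test actually runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is a rough equivalent of `openssl x509 -checkend`: it reports
// whether the first certificate in the PEM file expires within duration d.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; the test checks the certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/tmp/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}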
	I0816 13:44:32.195627   57945 kubeadm.go:392] StartCluster: {Name:old-k8s-version-882237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-882237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:32.195706   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:32.195741   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.235910   57945 cri.go:89] found id: ""
	I0816 13:44:32.235978   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:32.248201   57945 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:32.248223   57945 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:32.248286   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:32.258917   57945 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:32.260386   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-882237" does not appear in /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:32.261475   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-3966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-882237" cluster setting kubeconfig missing "old-k8s-version-882237" context setting]
	I0816 13:44:32.263041   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:32.335150   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:32.346103   57945 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.105
	I0816 13:44:32.346141   57945 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:32.346155   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:32.346212   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:32.390110   57945 cri.go:89] found id: ""
	I0816 13:44:32.390197   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:32.408685   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:32.419119   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:32.419146   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:32.419227   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:44:32.429282   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:32.429352   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:32.439444   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:44:32.449342   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:32.449409   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:32.459836   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.469581   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:32.469653   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:32.479655   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:44:32.489139   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:32.489204   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:32.499439   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:32.509706   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:32.672388   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:33.787722   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.115294487s)
	I0816 13:44:33.787763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.027016   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.141852   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:34.247190   57945 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:34.247286   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:34.747781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:33.022378   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:33.023028   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:33.022950   59082 retry.go:31] will retry after 1.906001247s: waiting for machine to come up
	I0816 13:44:34.930169   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930674   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:34.930702   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:34.930612   59082 retry.go:31] will retry after 2.809719622s: waiting for machine to come up
	I0816 13:44:33.263780   57440 addons.go:510] duration metric: took 3.216351591s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 13:44:34.816280   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:36.817474   57440 node_ready.go:53] node "no-preload-311070" has status "Ready":"False"
	I0816 13:44:35.248075   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:35.747575   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.247693   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:36.748219   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.247519   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:37.748189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.248143   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:38.748193   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.247412   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:39.748043   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
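The repeated `pgrep -xnf kube-apiserver.*minikube.*` calls above are a simple poll: run the check roughly every half second until a matching process appears or the wait times out. A hedged Go sketch of that pattern (the helper name waitForProcess and the timeout in main are assumptions, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess runs `pgrep -xnf pattern` every interval until it succeeds
// or the timeout elapses, mirroring the polling loop seen in the log.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // a matching process exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matching %q after %s", pattern, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("apiserver process is up")
	}
}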
	I0816 13:44:37.742122   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742506   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | unable to find current IP address of domain default-k8s-diff-port-893736 in network mk-default-k8s-diff-port-893736
	I0816 13:44:37.742545   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | I0816 13:44:37.742464   59082 retry.go:31] will retry after 4.139761236s: waiting for machine to come up
	I0816 13:44:37.815407   57440 node_ready.go:49] node "no-preload-311070" has status "Ready":"True"
	I0816 13:44:37.815428   57440 node_ready.go:38] duration metric: took 7.503128864s for node "no-preload-311070" to be "Ready" ...
	I0816 13:44:37.815437   57440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:37.820318   57440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825460   57440 pod_ready.go:93] pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.825478   57440 pod_ready.go:82] duration metric: took 5.136508ms for pod "coredns-6f6b679f8f-8kbs6" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.825486   57440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829609   57440 pod_ready.go:93] pod "etcd-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:37.829628   57440 pod_ready.go:82] duration metric: took 4.133294ms for pod "etcd-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:37.829635   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:39.835973   57440 pod_ready.go:103] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:40.335270   57440 pod_ready.go:93] pod "kube-apiserver-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:40.335289   57440 pod_ready.go:82] duration metric: took 2.505647853s for pod "kube-apiserver-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:40.335298   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
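The pod_ready waits above poll each system pod until its PodReady condition is True. As a rough client-go sketch of that check (assuming standard k8s.io/client-go dependencies; the podReady helper, kubeconfig path, and pod name are illustrative, not the test's own code):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has the PodReady condition set to
// True, which is the condition the "Ready" waits above are polling for.
func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path and pod name are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(clientset, "kube-system", "etcd-no-preload-311070")
	fmt.Println(ready, err)
}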
	I0816 13:44:43.233555   57240 start.go:364] duration metric: took 55.654362151s to acquireMachinesLock for "embed-certs-302520"
	I0816 13:44:43.233638   57240 start.go:96] Skipping create...Using existing machine configuration
	I0816 13:44:43.233649   57240 fix.go:54] fixHost starting: 
	I0816 13:44:43.234047   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:43.234078   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:43.253929   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0816 13:44:43.254405   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:43.254878   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:44:43.254900   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:43.255235   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:43.255400   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:44:43.255578   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:44:43.257434   57240 fix.go:112] recreateIfNeeded on embed-certs-302520: state=Stopped err=<nil>
	I0816 13:44:43.257472   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	W0816 13:44:43.257637   57240 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 13:44:43.259743   57240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-302520" ...
	I0816 13:44:41.885729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886143   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Found IP for machine: 192.168.50.186
	I0816 13:44:41.886162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserving static IP address...
	I0816 13:44:41.886178   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has current primary IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.886540   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.886570   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | skip adding static IP to network mk-default-k8s-diff-port-893736 - found existing host DHCP lease matching {name: "default-k8s-diff-port-893736", mac: "52:54:00:5f:b2:25", ip: "192.168.50.186"}
	I0816 13:44:41.886585   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Reserved static IP address: 192.168.50.186
	I0816 13:44:41.886600   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Waiting for SSH to be available...
	I0816 13:44:41.886617   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Getting to WaitForSSH function...
	I0816 13:44:41.888671   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889003   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:41.889047   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:41.889118   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH client type: external
	I0816 13:44:41.889142   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa (-rw-------)
	I0816 13:44:41.889181   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:44:41.889201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | About to run SSH command:
	I0816 13:44:41.889215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | exit 0
	I0816 13:44:42.017010   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | SSH cmd err, output: <nil>: 
	I0816 13:44:42.017374   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetConfigRaw
	I0816 13:44:42.017979   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.020580   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.020958   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.020992   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.021174   58430 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/config.json ...
	I0816 13:44:42.021342   58430 machine.go:93] provisionDockerMachine start ...
	I0816 13:44:42.021356   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.021521   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.023732   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024033   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.024057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.024201   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.024354   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.024667   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.024811   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.024994   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.025005   58430 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:44:42.137459   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:44:42.137495   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137722   58430 buildroot.go:166] provisioning hostname "default-k8s-diff-port-893736"
	I0816 13:44:42.137745   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.137925   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.140599   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.140987   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.141017   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.141148   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.141309   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141430   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.141536   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.141677   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.141843   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.141855   58430 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893736 && echo "default-k8s-diff-port-893736" | sudo tee /etc/hostname
	I0816 13:44:42.267643   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893736
	
	I0816 13:44:42.267670   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.270489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.270834   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.270867   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.271089   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.271266   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271405   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.271527   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.271675   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.271829   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.271847   58430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:44:42.398010   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:44:42.398057   58430 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:44:42.398122   58430 buildroot.go:174] setting up certificates
	I0816 13:44:42.398139   58430 provision.go:84] configureAuth start
	I0816 13:44:42.398157   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetMachineName
	I0816 13:44:42.398484   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:42.401217   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401566   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.401587   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.401749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.404082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404380   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.404425   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.404541   58430 provision.go:143] copyHostCerts
	I0816 13:44:42.404596   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:44:42.404606   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:44:42.404666   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:44:42.404758   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:44:42.404767   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:44:42.404788   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:44:42.404850   58430 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:44:42.404857   58430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:44:42.404873   58430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:44:42.404965   58430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893736 san=[127.0.0.1 192.168.50.186 default-k8s-diff-port-893736 localhost minikube]
	I0816 13:44:42.551867   58430 provision.go:177] copyRemoteCerts
	I0816 13:44:42.551928   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:44:42.551954   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.554945   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555276   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.555316   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.555517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.555699   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.555838   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.555964   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:42.643591   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:44:42.667108   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 13:44:42.690852   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:44:42.714001   58430 provision.go:87] duration metric: took 315.84846ms to configureAuth
	I0816 13:44:42.714030   58430 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:44:42.714189   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:42.714263   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.716726   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717082   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.717110   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.717282   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.717486   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717621   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.717740   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.717883   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:42.718038   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:42.718055   58430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:44:42.988769   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:44:42.988798   58430 machine.go:96] duration metric: took 967.444538ms to provisionDockerMachine
	I0816 13:44:42.988814   58430 start.go:293] postStartSetup for "default-k8s-diff-port-893736" (driver="kvm2")
	I0816 13:44:42.988833   58430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:44:42.988864   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:42.989226   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:44:42.989261   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:42.991868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992162   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:42.992184   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:42.992364   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:42.992537   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:42.992689   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:42.992838   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.079199   58430 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:44:43.083277   58430 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:44:43.083296   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:44:43.083357   58430 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:44:43.083459   58430 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:44:43.083576   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:44:43.092684   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:43.115693   58430 start.go:296] duration metric: took 126.86489ms for postStartSetup
	I0816 13:44:43.115735   58430 fix.go:56] duration metric: took 19.425768942s for fixHost
	I0816 13:44:43.115761   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.118597   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.118915   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.118947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.119100   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.119306   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119442   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.119563   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.119683   58430 main.go:141] libmachine: Using SSH client type: native
	I0816 13:44:43.119840   58430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 13:44:43.119850   58430 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:44:43.233362   58430 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815883.193133132
	
	I0816 13:44:43.233394   58430 fix.go:216] guest clock: 1723815883.193133132
	I0816 13:44:43.233406   58430 fix.go:229] Guest: 2024-08-16 13:44:43.193133132 +0000 UTC Remote: 2024-08-16 13:44:43.115740856 +0000 UTC m=+147.151935383 (delta=77.392276ms)
	I0816 13:44:43.233479   58430 fix.go:200] guest clock delta is within tolerance: 77.392276ms
	I0816 13:44:43.233486   58430 start.go:83] releasing machines lock for "default-k8s-diff-port-893736", held for 19.543554553s
	I0816 13:44:43.233517   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.233783   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:43.236492   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.236875   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.236901   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.237136   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237703   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.237943   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:43.238074   58430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:44:43.238153   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.238182   58430 ssh_runner.go:195] Run: cat /version.json
	I0816 13:44:43.238215   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:43.240639   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241000   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241029   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241193   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241360   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.241573   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:43.241581   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.241601   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:43.241733   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:43.241732   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.241895   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:43.242052   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:43.242178   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:43.352903   58430 ssh_runner.go:195] Run: systemctl --version
	I0816 13:44:43.359071   58430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:44:43.509233   58430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:44:43.516592   58430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:44:43.516666   58430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:44:43.534069   58430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:44:43.534096   58430 start.go:495] detecting cgroup driver to use...
	I0816 13:44:43.534167   58430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:44:43.553305   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:44:43.569958   58430 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:44:43.570007   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:44:43.590642   58430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:44:43.606411   58430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:44:43.733331   58430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:44:43.882032   58430 docker.go:233] disabling docker service ...
	I0816 13:44:43.882110   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:44:43.896780   58430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:44:43.909702   58430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:44:44.044071   58430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:44:44.170798   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:44:44.184421   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:44:44.203201   58430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:44:44.203269   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.213647   58430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:44:44.213708   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.224261   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.235295   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.247670   58430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:44:44.264065   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.276212   58430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.296049   58430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:44:44.307920   58430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:44:44.319689   58430 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:44:44.319746   58430 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:44:44.335735   58430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 13:44:44.352364   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:44.476754   58430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:44:44.618847   58430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:44:44.618914   58430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:44:44.623946   58430 start.go:563] Will wait 60s for crictl version
	I0816 13:44:44.624004   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:44:44.627796   58430 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:44:44.666274   58430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:44:44.666350   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.694476   58430 ssh_runner.go:195] Run: crio --version
	I0816 13:44:44.723937   58430 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:44:43.261237   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Start
	I0816 13:44:43.261399   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring networks are active...
	I0816 13:44:43.262183   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network default is active
	I0816 13:44:43.262591   57240 main.go:141] libmachine: (embed-certs-302520) Ensuring network mk-embed-certs-302520 is active
	I0816 13:44:43.263044   57240 main.go:141] libmachine: (embed-certs-302520) Getting domain xml...
	I0816 13:44:43.263849   57240 main.go:141] libmachine: (embed-certs-302520) Creating domain...
	I0816 13:44:44.565632   57240 main.go:141] libmachine: (embed-certs-302520) Waiting to get IP...
	I0816 13:44:44.566705   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.567120   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.567211   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.567113   59274 retry.go:31] will retry after 259.265867ms: waiting for machine to come up
	I0816 13:44:44.827603   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:44.828117   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:44.828152   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:44.828043   59274 retry.go:31] will retry after 271.270487ms: waiting for machine to come up
	I0816 13:44:40.247541   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:40.747938   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.247408   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:41.747777   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.248295   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:42.747393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.247508   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:43.748151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:44.725112   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetIP
	I0816 13:44:44.728077   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728446   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:44.728469   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:44.728728   58430 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 13:44:44.733365   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:44.746196   58430 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:44:44.746325   58430 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:44:44.746385   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:44.787402   58430 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:44:44.787481   58430 ssh_runner.go:195] Run: which lz4
	I0816 13:44:44.791755   58430 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:44:44.797290   58430 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:44:44.797320   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:44:42.342663   57440 pod_ready.go:93] pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.342685   57440 pod_ready.go:82] duration metric: took 2.007381193s for pod "kube-controller-manager-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.342694   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346807   57440 pod_ready.go:93] pod "kube-proxy-b8d5b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.346824   57440 pod_ready.go:82] duration metric: took 4.124529ms for pod "kube-proxy-b8d5b" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.346832   57440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351010   57440 pod_ready.go:93] pod "kube-scheduler-no-preload-311070" in "kube-system" namespace has status "Ready":"True"
	I0816 13:44:42.351025   57440 pod_ready.go:82] duration metric: took 4.186812ms for pod "kube-scheduler-no-preload-311070" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:42.351032   57440 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:44.358663   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:46.359708   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:45.100554   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.101150   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.101265   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.101207   59274 retry.go:31] will retry after 309.469795ms: waiting for machine to come up
	I0816 13:44:45.412518   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.413009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.413040   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.412975   59274 retry.go:31] will retry after 502.564219ms: waiting for machine to come up
	I0816 13:44:45.917731   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:45.918284   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:45.918316   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:45.918235   59274 retry.go:31] will retry after 723.442166ms: waiting for machine to come up
	I0816 13:44:46.642971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:46.643467   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:46.643498   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:46.643400   59274 retry.go:31] will retry after 600.365383ms: waiting for machine to come up
	I0816 13:44:47.245233   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:47.245756   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:47.245785   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:47.245710   59274 retry.go:31] will retry after 1.06438693s: waiting for machine to come up
	I0816 13:44:48.312043   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:48.312842   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:48.312886   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:48.312840   59274 retry.go:31] will retry after 903.877948ms: waiting for machine to come up
	I0816 13:44:49.218419   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:49.218805   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:49.218835   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:49.218758   59274 retry.go:31] will retry after 1.73892963s: waiting for machine to come up
	I0816 13:44:45.247523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:45.747694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.248397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.747660   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.247382   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:47.748220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.248130   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:48.747818   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.248360   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:49.747962   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:46.230345   58430 crio.go:462] duration metric: took 1.438624377s to copy over tarball
	I0816 13:44:46.230429   58430 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:44:48.358060   58430 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127589486s)
	I0816 13:44:48.358131   58430 crio.go:469] duration metric: took 2.127754698s to extract the tarball
	I0816 13:44:48.358145   58430 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:44:48.398054   58430 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:44:48.449391   58430 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:44:48.449416   58430 cache_images.go:84] Images are preloaded, skipping loading
	I0816 13:44:48.449425   58430 kubeadm.go:934] updating node { 192.168.50.186 8444 v1.31.0 crio true true} ...
	I0816 13:44:48.449576   58430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-893736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:44:48.449662   58430 ssh_runner.go:195] Run: crio config
	I0816 13:44:48.499389   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:48.499413   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:48.499424   58430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:44:48.499452   58430 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893736 NodeName:default-k8s-diff-port-893736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:44:48.499576   58430 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-893736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:44:48.499653   58430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:44:48.509639   58430 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:44:48.509706   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:44:48.519099   58430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 13:44:48.535866   58430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:44:48.552977   58430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 13:44:48.571198   58430 ssh_runner.go:195] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 13:44:48.575881   58430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:44:48.587850   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:48.703848   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:48.721449   58430 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736 for IP: 192.168.50.186
	I0816 13:44:48.721476   58430 certs.go:194] generating shared ca certs ...
	I0816 13:44:48.721496   58430 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:48.721677   58430 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:44:48.721731   58430 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:44:48.721745   58430 certs.go:256] generating profile certs ...
	I0816 13:44:48.721843   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/client.key
	I0816 13:44:48.721926   58430 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key.64c9b41b
	I0816 13:44:48.721980   58430 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key
	I0816 13:44:48.722107   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:44:48.722138   58430 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:44:48.722149   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:44:48.722182   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:44:48.722204   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:44:48.722225   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:44:48.722258   58430 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:44:48.722818   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:44:48.779462   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:44:48.814653   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:44:48.887435   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:44:48.913644   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 13:44:48.937536   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:44:48.960729   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:44:48.984375   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/default-k8s-diff-port-893736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 13:44:49.007997   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:44:49.031631   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:44:49.054333   58430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:44:49.076566   58430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:44:49.092986   58430 ssh_runner.go:195] Run: openssl version
	I0816 13:44:49.098555   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:44:49.109454   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114868   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.114934   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:44:49.120811   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:44:49.131829   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:44:49.142825   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147276   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.147322   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:44:49.152678   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:44:49.163622   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:44:49.174426   58430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179353   58430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.179406   58430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:44:49.185129   58430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
	I0816 13:44:49.196668   58430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:44:49.201447   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:44:49.207718   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:44:49.213869   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:44:49.220325   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:44:49.226220   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:44:49.231971   58430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 13:44:49.238080   58430 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-893736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-893736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:44:49.238178   58430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:44:49.238231   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.276621   58430 cri.go:89] found id: ""
	I0816 13:44:49.276719   58430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:44:49.287765   58430 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:44:49.287785   58430 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:44:49.287829   58430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:44:49.298038   58430 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:44:49.299171   58430 kubeconfig.go:125] found "default-k8s-diff-port-893736" server: "https://192.168.50.186:8444"
	I0816 13:44:49.301521   58430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:44:49.311800   58430 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.186
	I0816 13:44:49.311833   58430 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:44:49.311845   58430 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:44:49.311899   58430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:44:49.363716   58430 cri.go:89] found id: ""
	I0816 13:44:49.363784   58430 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:44:49.381053   58430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:44:49.391306   58430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:44:49.391322   58430 kubeadm.go:157] found existing configuration files:
	
	I0816 13:44:49.391370   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 13:44:49.400770   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:44:49.400829   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:44:49.410252   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 13:44:49.419405   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:44:49.419481   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:44:49.429330   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.438521   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:44:49.438587   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:44:49.448144   58430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 13:44:49.456744   58430 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:44:49.456805   58430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:44:49.466062   58430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:44:49.476159   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:49.597639   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.673182   58430 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075495766s)
	I0816 13:44:50.673218   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.887802   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:50.958384   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:48.858145   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:51.358083   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:50.959807   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:50.960217   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:50.960236   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:50.960188   59274 retry.go:31] will retry after 2.32558417s: waiting for machine to come up
	I0816 13:44:53.287947   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:53.288441   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:53.288470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:53.288388   59274 retry.go:31] will retry after 1.85414625s: waiting for machine to come up
	I0816 13:44:50.247710   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:50.747741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.248099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.748052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.247958   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.748141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.247751   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:53.747353   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.247624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:54.747699   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.054015   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:44:51.054101   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:51.554846   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.055178   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:52.082087   58430 api_server.go:72] duration metric: took 1.028080423s to wait for apiserver process to appear ...
	I0816 13:44:52.082114   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:44:52.082133   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:52.082624   58430 api_server.go:269] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": dial tcp 192.168.50.186:8444: connect: connection refused
	I0816 13:44:52.582261   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.336530   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.336565   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.336580   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.374699   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:44:55.374733   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:44:55.583112   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:55.588756   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:55.588782   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.082182   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.088062   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.088108   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:56.582273   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:56.587049   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:44:56.587087   58430 api_server.go:103] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:44:57.082664   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:44:57.092562   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:44:57.100740   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:44:57.100767   58430 api_server.go:131] duration metric: took 5.018647278s to wait for apiserver health ...
	I0816 13:44:57.100777   58430 cni.go:84] Creating CNI manager for ""
	I0816 13:44:57.100784   58430 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:44:57.102775   58430 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
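The 403 → 500 → 200 progression above is the restarted apiserver coming up: anonymous probes are rejected until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks complete, after which /healthz returns ok and minikube proceeds to CNI configuration. A minimal sketch of this kind of polling loop (the endpoint comes from the log; the interval, timeout, and helper are illustrative assumptions, not minikube's actual code):

    // Illustrative only: poll the apiserver /healthz endpoint until it returns 200,
    // tolerating connection-refused, 403 and 500 responses during bring-up.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// The apiserver presents a self-signed cert during bring-up, so this
    	// anonymous probe skips verification.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			// connection refused: the apiserver is not listening yet
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return nil // healthz returned "ok"
    		}
    		// 403: anonymous access to /healthz not yet authorized
    		// 500: one or more post-start hooks still failing
    		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.186:8444/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }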
	I0816 13:44:53.358390   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:55.358437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:44:57.104079   58430 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:44:57.115212   58430 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 13:44:57.137445   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:44:57.150376   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:44:57.150412   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:44:57.150422   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:44:57.150435   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:44:57.150448   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:44:57.150454   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:44:57.150458   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:44:57.150463   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:44:57.150472   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:44:57.150481   58430 system_pods.go:74] duration metric: took 13.019757ms to wait for pod list to return data ...
	I0816 13:44:57.150489   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:44:57.153699   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:44:57.153721   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:44:57.153731   58430 node_conditions.go:105] duration metric: took 3.237407ms to run NodePressure ...
	I0816 13:44:57.153752   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:44:57.439130   58430 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446848   58430 kubeadm.go:739] kubelet initialised
	I0816 13:44:57.446876   58430 kubeadm.go:740] duration metric: took 7.718113ms waiting for restarted kubelet to initialise ...
	I0816 13:44:57.446885   58430 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:57.452263   58430 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.459002   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459024   58430 pod_ready.go:82] duration metric: took 6.735487ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.459033   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.459039   58430 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.463723   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463742   58430 pod_ready.go:82] duration metric: took 4.695932ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.463751   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.463756   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.468593   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468619   58430 pod_ready.go:82] duration metric: took 4.856498ms for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.468632   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.468643   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.541251   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541278   58430 pod_ready.go:82] duration metric: took 72.626413ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.541290   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.541296   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:57.940580   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940616   58430 pod_ready.go:82] duration metric: took 399.312571ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:57.940627   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-proxy-btq6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:57.940635   58430 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.340647   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340671   58430 pod_ready.go:82] duration metric: took 400.026004ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.340683   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.340694   58430 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:44:58.750549   58430 pod_ready.go:98] node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750573   58430 pod_ready.go:82] duration metric: took 409.872187ms for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:44:58.750588   58430 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-893736" hosting pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:44:58.750598   58430 pod_ready.go:39] duration metric: took 1.303702313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:44:58.750626   58430 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:44:58.766462   58430 ops.go:34] apiserver oom_adj: -16
	I0816 13:44:58.766482   58430 kubeadm.go:597] duration metric: took 9.478690644s to restartPrimaryControlPlane
	I0816 13:44:58.766491   58430 kubeadm.go:394] duration metric: took 9.528416258s to StartCluster
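Each pod_ready wait above is skipped because the node itself is still not Ready immediately after the control-plane restart. A minimal sketch, assuming client-go, of how such a readiness sweep over kube-system pods can be expressed (the kubeconfig path is taken from the log; the program itself is hypothetical, not minikube's pod_ready.go):

    // Illustrative only: list kube-system pods and report their Ready condition.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-3966/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, pod := range pods.Items {
    		ready := false
    		for _, c := range pod.Status.Conditions {
    			// A pod counts as Ready only when its PodReady condition is True.
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s Ready=%v\n", pod.Name, ready)
    	}
    }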
	I0816 13:44:58.766509   58430 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.766572   58430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:44:58.770737   58430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:44:58.771036   58430 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:44:58.771138   58430 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:44:58.771218   58430 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771232   58430 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771245   58430 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-893736"
	I0816 13:44:58.771281   58430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893736"
	I0816 13:44:58.771252   58430 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771337   58430 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:44:58.771371   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771285   58430 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.771447   58430 addons.go:243] addon metrics-server should already be in state true
	I0816 13:44:58.771485   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.771231   58430 config.go:182] Loaded profile config "default-k8s-diff-port-893736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:44:58.771653   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771682   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771750   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771781   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.771839   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.771886   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.772665   58430 out.go:177] * Verifying Kubernetes components...
	I0816 13:44:58.773992   58430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:44:58.788717   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0816 13:44:58.789233   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.789833   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.789859   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.790269   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.790882   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.790913   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.791553   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0816 13:44:58.791556   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0816 13:44:58.791945   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.791979   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.792413   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792440   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.792813   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.792963   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.792986   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.793013   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.793374   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.793940   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.793986   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.796723   58430 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-893736"
	W0816 13:44:58.796740   58430 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:44:58.796763   58430 host.go:66] Checking if "default-k8s-diff-port-893736" exists ...
	I0816 13:44:58.797138   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.797184   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.806753   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I0816 13:44:58.807162   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.807605   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.807624   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.807984   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.808229   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.809833   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.811642   58430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:44:58.812716   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0816 13:44:58.812888   58430 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:58.812902   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:44:58.812937   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.813184   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.813668   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.813695   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.813725   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0816 13:44:58.814101   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.814207   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.814696   58430 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:44:58.814715   58430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:44:58.814948   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.814961   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.815304   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.815518   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.816936   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817482   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.817529   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.817543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.817871   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.818057   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.818219   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.818397   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.819251   58430 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:44:55.143862   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:55.144403   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:55.144433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:55.144354   59274 retry.go:31] will retry after 3.573850343s: waiting for machine to come up
	I0816 13:44:58.720104   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:44:58.720571   57240 main.go:141] libmachine: (embed-certs-302520) DBG | unable to find current IP address of domain embed-certs-302520 in network mk-embed-certs-302520
	I0816 13:44:58.720606   57240 main.go:141] libmachine: (embed-certs-302520) DBG | I0816 13:44:58.720510   59274 retry.go:31] will retry after 4.52867767s: waiting for machine to come up
	I0816 13:44:55.248021   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:55.747406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.247470   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:56.747399   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.247462   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:57.747637   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.248194   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.747381   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.247772   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:59.748373   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:44:58.820720   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:44:58.820733   58430 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:44:58.820747   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.823868   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824290   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.824305   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.824489   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.824629   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.824764   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.824860   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.830530   58430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0816 13:44:58.830894   58430 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:44:58.831274   58430 main.go:141] libmachine: Using API Version  1
	I0816 13:44:58.831294   58430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:44:58.831583   58430 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:44:58.831729   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetState
	I0816 13:44:58.833321   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .DriverName
	I0816 13:44:58.833512   58430 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:58.833526   58430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:44:58.833543   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHHostname
	I0816 13:44:58.836244   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836626   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:b2:25", ip: ""} in network mk-default-k8s-diff-port-893736: {Iface:virbr2 ExpiryTime:2024-08-16 14:44:35 +0000 UTC Type:0 Mac:52:54:00:5f:b2:25 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-diff-port-893736 Clientid:01:52:54:00:5f:b2:25}
	I0816 13:44:58.836649   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | domain default-k8s-diff-port-893736 has defined IP address 192.168.50.186 and MAC address 52:54:00:5f:b2:25 in network mk-default-k8s-diff-port-893736
	I0816 13:44:58.836762   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHPort
	I0816 13:44:58.836947   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHKeyPath
	I0816 13:44:58.837098   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .GetSSHUsername
	I0816 13:44:58.837234   58430 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/default-k8s-diff-port-893736/id_rsa Username:docker}
	I0816 13:44:58.973561   58430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:44:58.995763   58430 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:44:59.118558   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:44:59.126100   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:44:59.126125   58430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:44:59.154048   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:44:59.162623   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:44:59.162649   58430 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:44:59.213614   58430 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.213635   58430 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:44:59.233653   58430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:44:59.485000   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485030   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485329   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.485384   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485397   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485406   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.485414   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.485736   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.485777   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:44:59.485741   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:44:59.491692   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:44:59.491711   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:44:59.491938   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:44:59.491957   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.273964   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.04027784s)
	I0816 13:45:00.274018   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274036   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274032   58430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.119945545s)
	I0816 13:45:00.274065   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274080   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274373   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274388   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274398   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274406   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274441   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274481   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274499   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274513   58430 main.go:141] libmachine: Making call to close driver server
	I0816 13:45:00.274526   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) Calling .Close
	I0816 13:45:00.274620   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274633   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.274643   58430 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-893736"
	I0816 13:45:00.274749   58430 main.go:141] libmachine: (default-k8s-diff-port-893736) DBG | Closing plugin on server side
	I0816 13:45:00.274842   58430 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:45:00.274851   58430 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:45:00.276747   58430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 13:45:00.278150   58430 addons.go:510] duration metric: took 1.506994649s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
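The addon manifests are copied onto the guest and applied there with the bundled kubectl, as the ssh_runner lines above show. A rough sketch of that apply step (the applyAddons helper is hypothetical; the binary path and KUBECONFIG value mirror the log):

    // Illustrative only: apply addon manifests with the guest's bundled kubectl.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func applyAddons(files ...string) error {
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	// sudo accepts VAR=value assignments before the command to run.
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := applyAddons("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }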
	I0816 13:44:57.858846   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:00.357028   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:03.253913   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254379   57240 main.go:141] libmachine: (embed-certs-302520) Found IP for machine: 192.168.39.125
	I0816 13:45:03.254401   57240 main.go:141] libmachine: (embed-certs-302520) Reserving static IP address...
	I0816 13:45:03.254418   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has current primary IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.254776   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.254804   57240 main.go:141] libmachine: (embed-certs-302520) Reserved static IP address: 192.168.39.125
	I0816 13:45:03.254822   57240 main.go:141] libmachine: (embed-certs-302520) DBG | skip adding static IP to network mk-embed-certs-302520 - found existing host DHCP lease matching {name: "embed-certs-302520", mac: "52:54:00:15:a3:1b", ip: "192.168.39.125"}
	I0816 13:45:03.254840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Getting to WaitForSSH function...
	I0816 13:45:03.254848   57240 main.go:141] libmachine: (embed-certs-302520) Waiting for SSH to be available...
	I0816 13:45:03.256951   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257302   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.257327   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.257462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH client type: external
	I0816 13:45:03.257483   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa (-rw-------)
	I0816 13:45:03.257519   57240 main.go:141] libmachine: (embed-certs-302520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 13:45:03.257528   57240 main.go:141] libmachine: (embed-certs-302520) DBG | About to run SSH command:
	I0816 13:45:03.257537   57240 main.go:141] libmachine: (embed-certs-302520) DBG | exit 0
	I0816 13:45:03.389262   57240 main.go:141] libmachine: (embed-certs-302520) DBG | SSH cmd err, output: <nil>: 
	I0816 13:45:03.389630   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetConfigRaw
	I0816 13:45:03.390305   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.392462   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.392767   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.392795   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.393012   57240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/config.json ...
	I0816 13:45:03.393212   57240 machine.go:93] provisionDockerMachine start ...
	I0816 13:45:03.393230   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:03.393453   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.395589   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.395949   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.395971   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.396086   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.396258   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.396589   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.396785   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.397004   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.397042   57240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 13:45:03.513624   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 13:45:03.513655   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.513954   57240 buildroot.go:166] provisioning hostname "embed-certs-302520"
	I0816 13:45:03.513976   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.514199   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.517138   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517499   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.517520   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.517672   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.517867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518007   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.518168   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.518364   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.518583   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.518599   57240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-302520 && echo "embed-certs-302520" | sudo tee /etc/hostname
	I0816 13:45:03.647799   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-302520
	
	I0816 13:45:03.647840   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.650491   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.650846   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.650880   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.651103   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.651301   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651469   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.651614   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.651778   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:03.651935   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:03.651951   57240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-302520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-302520/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-302520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 13:45:03.778350   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 13:45:03.778381   57240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-3966/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-3966/.minikube}
	I0816 13:45:03.778411   57240 buildroot.go:174] setting up certificates
	I0816 13:45:03.778423   57240 provision.go:84] configureAuth start
	I0816 13:45:03.778435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetMachineName
	I0816 13:45:03.778689   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:03.781319   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781673   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.781695   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.781829   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.783724   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784035   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.784064   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.784180   57240 provision.go:143] copyHostCerts
	I0816 13:45:03.784243   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem, removing ...
	I0816 13:45:03.784262   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem
	I0816 13:45:03.784335   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/ca.pem (1082 bytes)
	I0816 13:45:03.784462   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem, removing ...
	I0816 13:45:03.784474   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem
	I0816 13:45:03.784503   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/cert.pem (1123 bytes)
	I0816 13:45:03.784568   57240 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem, removing ...
	I0816 13:45:03.784578   57240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem
	I0816 13:45:03.784600   57240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-3966/.minikube/key.pem (1675 bytes)
	I0816 13:45:03.784647   57240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem org=jenkins.embed-certs-302520 san=[127.0.0.1 192.168.39.125 embed-certs-302520 localhost minikube]
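The server certificate generated above is signed by the minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.125, embed-certs-302520, localhost, minikube). minikube does this in Go; an approximate openssl equivalent, with illustrative file names and validity period, would be:

    # approximate equivalent only; minikube generates and signs this certificate internally
    openssl req -new -newkey rsa:2048 -nodes \
      -subj "/O=jenkins.embed-certs-302520" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.125,DNS:embed-certs-302520,DNS:localhost,DNS:minikube")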
	I0816 13:45:03.901261   57240 provision.go:177] copyRemoteCerts
	I0816 13:45:03.901314   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 13:45:03.901339   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:03.904505   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.904893   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:03.904933   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:03.905118   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:03.905329   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:03.905499   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:03.905650   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:03.996083   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 13:45:04.024594   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 13:45:04.054080   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 13:45:04.079810   57240 provision.go:87] duration metric: took 301.374056ms to configureAuth
	I0816 13:45:04.079865   57240 buildroot.go:189] setting minikube options for container-runtime
	I0816 13:45:04.080048   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:45:04.080116   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.082649   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083037   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.083090   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.083239   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.083430   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.083775   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.083951   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.084149   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.084171   57240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 13:45:04.404121   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 13:45:04.404150   57240 machine.go:96] duration metric: took 1.010924979s to provisionDockerMachine
	I0816 13:45:04.404163   57240 start.go:293] postStartSetup for "embed-certs-302520" (driver="kvm2")
	I0816 13:45:04.404182   57240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 13:45:04.404202   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.404542   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 13:45:04.404574   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.407763   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408118   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.408145   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.408311   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.408508   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.408685   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.408864   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.496519   57240 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 13:45:04.501262   57240 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 13:45:04.501282   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/addons for local assets ...
	I0816 13:45:04.501352   57240 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-3966/.minikube/files for local assets ...
	I0816 13:45:04.501440   57240 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem -> 111492.pem in /etc/ssl/certs
	I0816 13:45:04.501554   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 13:45:04.511338   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:04.535372   57240 start.go:296] duration metric: took 131.188411ms for postStartSetup
	I0816 13:45:04.535411   57240 fix.go:56] duration metric: took 21.301761751s for fixHost
	I0816 13:45:04.535435   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.538286   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538651   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.538676   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.538868   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.539069   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539208   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.539344   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.539504   57240 main.go:141] libmachine: Using SSH client type: native
	I0816 13:45:04.539702   57240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0816 13:45:04.539715   57240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 13:45:04.653529   57240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723815904.606422212
	
	I0816 13:45:04.653556   57240 fix.go:216] guest clock: 1723815904.606422212
	I0816 13:45:04.653566   57240 fix.go:229] Guest: 2024-08-16 13:45:04.606422212 +0000 UTC Remote: 2024-08-16 13:45:04.535416156 +0000 UTC m=+359.547804920 (delta=71.006056ms)
	I0816 13:45:04.653598   57240 fix.go:200] guest clock delta is within tolerance: 71.006056ms
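The guest-clock check above compares the VM's `date +%s.%N` output against the host-side timestamp taken around the SSH call; here the 71ms delta is within tolerance, so no adjustment is made. A rough standalone sketch of the same comparison (the 1.0s tolerance is illustrative, not minikube's actual threshold):

    host_now=$(date +%s.%N)
    guest_now=$(ssh docker@192.168.39.125 date +%s.%N)
    # absolute difference in seconds, then pass/fail against an illustrative tolerance
    awk -v h="$host_now" -v g="$guest_now" 'BEGIN {
        d = g - h; if (d < 0) d = -d;
        printf "guest clock delta: %.6fs\n", d;
        exit (d > 1.0) ? 1 : 0
    }'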
	I0816 13:45:04.653605   57240 start.go:83] releasing machines lock for "embed-certs-302520", held for 21.419990329s
	I0816 13:45:04.653631   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.653922   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:04.656682   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657009   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.657034   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.657211   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657800   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.657981   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:45:04.658069   57240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 13:45:04.658114   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.658172   57240 ssh_runner.go:195] Run: cat /version.json
	I0816 13:45:04.658193   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:45:04.660629   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.660942   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661051   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661076   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661315   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661433   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:04.661470   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:04.661474   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.661598   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:45:04.661646   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.661841   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.661904   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:45:04.662054   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:45:04.662199   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:45:04.767691   57240 ssh_runner.go:195] Run: systemctl --version
	I0816 13:45:04.773984   57240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 13:45:04.925431   57240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 13:45:04.931848   57240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 13:45:04.931931   57240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 13:45:04.951355   57240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 13:45:04.951381   57240 start.go:495] detecting cgroup driver to use...
	I0816 13:45:04.951442   57240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 13:45:04.972903   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 13:45:04.987531   57240 docker.go:217] disabling cri-docker service (if available) ...
	I0816 13:45:04.987600   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 13:45:05.001880   57240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 13:45:05.018403   57240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 13:45:00.247513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.748342   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.248179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:01.747757   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.247789   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:02.748162   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.247936   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:03.747434   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.247832   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:04.747704   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:00.999833   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:03.500652   58430 node_ready.go:53] node "default-k8s-diff-port-893736" has status "Ready":"False"
	I0816 13:45:05.143662   57240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 13:45:05.297447   57240 docker.go:233] disabling docker service ...
	I0816 13:45:05.297527   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 13:45:05.313382   57240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 13:45:05.327116   57240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 13:45:05.486443   57240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 13:45:05.620465   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 13:45:05.634813   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 13:45:05.653822   57240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 13:45:05.653887   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.664976   57240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 13:45:05.665045   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.676414   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.688631   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.700400   57240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 13:45:05.712822   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.724573   57240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 13:45:05.742934   57240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
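Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and default sysctl shown in the commands. A quick way to confirm, with the approximate expected values as comments (section placement in the drop-in follows CRI-O's stock layout, which the log does not show):

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",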
	I0816 13:45:05.755669   57240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 13:45:05.766837   57240 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 13:45:05.766890   57240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 13:45:05.782296   57240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
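The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why minikube follows it with modprobe and then enables IPv4 forwarding. The usual node prerequisites look roughly like this (persisting them via modules-load.d/sysctl.d is an assumption, not something this run shows):

    sudo modprobe br_netfilter
    sudo sysctl -w net.ipv4.ip_forward=1
    # common companion setting once the module is loaded (only probed, not explicitly set, in the log above)
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1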
	I0816 13:45:05.793695   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:45:05.919559   57240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 13:45:06.057480   57240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 13:45:06.057543   57240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 13:45:06.062348   57240 start.go:563] Will wait 60s for crictl version
	I0816 13:45:06.062414   57240 ssh_runner.go:195] Run: which crictl
	I0816 13:45:06.066456   57240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 13:45:06.104075   57240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 13:45:06.104156   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.132406   57240 ssh_runner.go:195] Run: crio --version
	I0816 13:45:06.161878   57240 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 13:45:02.357119   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:04.361365   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.859546   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:06.163233   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetIP
	I0816 13:45:06.165924   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166310   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:45:06.166333   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:45:06.166529   57240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 13:45:06.170722   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
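The one-liner above is minikube's idiom for idempotently pinning a hosts entry: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The net effect is a single entry, which can be checked with:

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.39.1	host.minikube.internal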
	I0816 13:45:06.183152   57240 kubeadm.go:883] updating cluster {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 13:45:06.183256   57240 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 13:45:06.183306   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:06.223405   57240 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 13:45:06.223489   57240 ssh_runner.go:195] Run: which lz4
	I0816 13:45:06.227851   57240 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 13:45:06.232132   57240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 13:45:06.232156   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 13:45:07.642616   57240 crio.go:462] duration metric: took 1.414789512s to copy over tarball
	I0816 13:45:07.642698   57240 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 13:45:09.794329   57240 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.151601472s)
	I0816 13:45:09.794359   57240 crio.go:469] duration metric: took 2.151717024s to extract the tarball
	I0816 13:45:09.794369   57240 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 13:45:09.833609   57240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 13:45:09.878781   57240 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 13:45:09.878806   57240 cache_images.go:84] Images are preloaded, skipping loading
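The preload decision above hinges on one probe: list the images the runtime already has and look for the kube-apiserver image for the target version (compare the earlier "couldn't find preloaded image for registry.k8s.io/kube-apiserver:v1.31.0" before the tarball was extracted). A sketch of an equivalent manual check:

    sudo crictl images --output json \
      | grep -q 'registry.k8s.io/kube-apiserver:v1.31.0' \
      && echo "preloaded images present" \
      || echo "preload tarball needed"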
	I0816 13:45:09.878815   57240 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0816 13:45:09.878944   57240 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-302520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 13:45:09.879032   57240 ssh_runner.go:195] Run: crio config
	I0816 13:45:09.924876   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:09.924900   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:09.924927   57240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 13:45:09.924958   57240 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-302520 NodeName:embed-certs-302520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 13:45:09.925150   57240 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-302520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 13:45:09.925226   57240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 13:45:09.935204   57240 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 13:45:09.935280   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 13:45:09.945366   57240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 13:45:09.961881   57240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 13:45:09.978495   57240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 13:45:09.995664   57240 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0816 13:45:10.000132   57240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 13:45:10.013039   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
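The kubeadm configuration rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new. On a brand-new cluster a file like this would normally be consumed by kubeadm init --config; in this run minikube instead diffs it against the existing file and takes the restart path ("found existing configuration, will attempt cluster restart" further down). A hedged sketch of the fresh-start invocation, with any flag beyond --config being an assumption about minikube's exact call:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new \
        --ignore-preflight-errors=all   # assumption: the real invocation and phase flags are not shown in this log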
	I0816 13:45:05.247343   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:05.747420   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.247801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.747978   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.248393   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:07.747801   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.248388   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:08.747624   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.247530   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:09.748311   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:06.000553   58430 node_ready.go:49] node "default-k8s-diff-port-893736" has status "Ready":"True"
	I0816 13:45:06.000579   58430 node_ready.go:38] duration metric: took 7.004778161s for node "default-k8s-diff-port-893736" to be "Ready" ...
	I0816 13:45:06.000590   58430 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:06.006987   58430 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012552   58430 pod_ready.go:93] pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.012577   58430 pod_ready.go:82] duration metric: took 5.565882ms for pod "coredns-6f6b679f8f-xdwhx" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.012588   58430 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519889   58430 pod_ready.go:93] pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:06.519919   58430 pod_ready.go:82] duration metric: took 507.322547ms for pod "etcd-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:06.519932   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:08.527411   58430 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:09.527923   58430 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.527950   58430 pod_ready.go:82] duration metric: took 3.008009418s for pod "kube-apiserver-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.527963   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534422   58430 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.534460   58430 pod_ready.go:82] duration metric: took 6.488169ms for pod "kube-controller-manager-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.534476   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538660   58430 pod_ready.go:93] pod "kube-proxy-btq6r" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.538688   58430 pod_ready.go:82] duration metric: took 4.202597ms for pod "kube-proxy-btq6r" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.538700   58430 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600350   58430 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:09.600377   58430 pod_ready.go:82] duration metric: took 61.666987ms for pod "kube-scheduler-default-k8s-diff-port-893736" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.600391   58430 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:09.361968   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:11.859112   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:10.143519   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:45:10.160358   57240 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520 for IP: 192.168.39.125
	I0816 13:45:10.160381   57240 certs.go:194] generating shared ca certs ...
	I0816 13:45:10.160400   57240 certs.go:226] acquiring lock for ca certs: {Name:mkdb46755373c135acf8239d2d4352f4b0b3d1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:45:10.160591   57240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key
	I0816 13:45:10.160646   57240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key
	I0816 13:45:10.160656   57240 certs.go:256] generating profile certs ...
	I0816 13:45:10.160767   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/client.key
	I0816 13:45:10.160845   57240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key.f0c5f9ff
	I0816 13:45:10.160893   57240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key
	I0816 13:45:10.161075   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem (1338 bytes)
	W0816 13:45:10.161133   57240 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149_empty.pem, impossibly tiny 0 bytes
	I0816 13:45:10.161148   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 13:45:10.161182   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/ca.pem (1082 bytes)
	I0816 13:45:10.161213   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/cert.pem (1123 bytes)
	I0816 13:45:10.161243   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/certs/key.pem (1675 bytes)
	I0816 13:45:10.161298   57240 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem (1708 bytes)
	I0816 13:45:10.161944   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 13:45:10.202268   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 13:45:10.242684   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 13:45:10.287223   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 13:45:10.316762   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 13:45:10.343352   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 13:45:10.371042   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 13:45:10.394922   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/embed-certs-302520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 13:45:10.419358   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/ssl/certs/111492.pem --> /usr/share/ca-certificates/111492.pem (1708 bytes)
	I0816 13:45:10.442301   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 13:45:10.465266   57240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-3966/.minikube/certs/11149.pem --> /usr/share/ca-certificates/11149.pem (1338 bytes)
	I0816 13:45:10.487647   57240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 13:45:10.504713   57240 ssh_runner.go:195] Run: openssl version
	I0816 13:45:10.510493   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111492.pem && ln -fs /usr/share/ca-certificates/111492.pem /etc/ssl/certs/111492.pem"
	I0816 13:45:10.521818   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526637   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:33 /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.526681   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111492.pem
	I0816 13:45:10.532660   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111492.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 13:45:10.543403   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 13:45:10.554344   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559089   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.559149   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 13:45:10.564982   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 13:45:10.576074   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11149.pem && ln -fs /usr/share/ca-certificates/11149.pem /etc/ssl/certs/11149.pem"
	I0816 13:45:10.586596   57240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591586   57240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:33 /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.591637   57240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11149.pem
	I0816 13:45:10.597624   57240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11149.pem /etc/ssl/certs/51391683.0"
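The `openssl x509 -hash -noout` / `ln -fs .../XXXXXXXX.0` pairs above implement OpenSSL's hashed-directory lookup: each symlink in /etc/ssl/certs is named after the subject-name hash of the certificate it points at, which is where b5213941.0 and 3ec20f2e.0 come from. A sketch for one of the certs:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"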
	I0816 13:45:10.608838   57240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 13:45:10.613785   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 13:45:10.619902   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 13:45:10.625554   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 13:45:10.631526   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 13:45:10.637251   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 13:45:10.643210   57240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
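The series of `-checkend 86400` runs above asks, for each control-plane certificate, whether it will still be valid 86400 seconds (24 hours) from now; a zero exit status means the cert does not expire within that window, so no regeneration is needed. For example:

    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h"   # minikube would regenerate it in that case (assumption)
    fi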
	I0816 13:45:10.649187   57240 kubeadm.go:392] StartCluster: {Name:embed-certs-302520 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-302520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 13:45:10.649298   57240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 13:45:10.649349   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.686074   57240 cri.go:89] found id: ""
	I0816 13:45:10.686153   57240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 13:45:10.696504   57240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 13:45:10.696527   57240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 13:45:10.696581   57240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 13:45:10.706447   57240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:45:10.707413   57240 kubeconfig.go:125] found "embed-certs-302520" server: "https://192.168.39.125:8443"
	I0816 13:45:10.710045   57240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 13:45:10.719563   57240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0816 13:45:10.719599   57240 kubeadm.go:1160] stopping kube-system containers ...
	I0816 13:45:10.719613   57240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 13:45:10.719665   57240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 13:45:10.759584   57240 cri.go:89] found id: ""
	I0816 13:45:10.759661   57240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 13:45:10.776355   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:45:10.786187   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:45:10.786205   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:45:10.786244   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:45:10.795644   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:45:10.795723   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:45:10.807988   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:45:10.817234   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:45:10.817299   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:45:10.826601   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.835702   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:45:10.835763   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:45:10.845160   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:45:10.855522   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:45:10.855578   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
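The grep-then-rm pairs above are the stale-config cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed so the kubeconfig phase can regenerate it (here all four files are simply absent, so each grep exits with status 2 and the rm -f is a no-op). The same idiom, condensed:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f   # missing or pointing at the wrong endpoint: drop it
    done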
	I0816 13:45:10.865445   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:45:10.875429   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:10.988958   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.195215   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206217359s)
	I0816 13:45:12.195241   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.432322   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:12.514631   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
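Rather than a full kubeadm init, the restart replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml. Roughly, the sequence above corresponds to the following commands run as root on the node (PATH prefixed with the cached kubeadm binaries, as in the log):

    export PATH=/var/lib/minikube/binaries/v1.31.0:$PATH
    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml   # (re)issue cluster certificates
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml   # admin/kubelet/controller-manager/scheduler kubeconfigs
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml   # write kubelet config and start the kubelet
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml   # static-pod manifests for apiserver, controller-manager, scheduler
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml   # static-pod manifest for the local etcd member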
	I0816 13:45:12.606133   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:45:12.606238   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.106823   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.606856   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.624866   57240 api_server.go:72] duration metric: took 1.018743147s to wait for apiserver process to appear ...
	I0816 13:45:13.624897   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:45:13.624930   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:13.625953   57240 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I0816 13:45:14.124979   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:10.247689   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:10.747756   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.247963   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.747523   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.247397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:12.748146   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.247976   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:13.748109   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.247662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:14.748041   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:11.607443   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.107647   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:14.357916   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.358986   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:16.404020   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.404049   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.404062   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.462649   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 13:45:16.462685   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 13:45:16.625998   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:16.632560   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:16.632586   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.124984   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.133533   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 13:45:17.133563   57240 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 13:45:17.624993   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:45:17.629720   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:45:17.635848   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:45:17.635874   57240 api_server.go:131] duration metric: took 4.010970063s to wait for apiserver health ...
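The health wait is a simple poll of /healthz until it returns 200: the early 403s are the anonymous probe being rejected (likely because the RBAC rules that permit unauthenticated /healthz access are not in place yet, consistent with the failing rbac/bootstrap-roles check), and the 500s carry the per-check breakdown shown above. The same probe by hand, using the endpoint from this run:

    curl -sk https://192.168.39.125:8443/healthz              # prints "ok" once the apiserver is healthy
    curl -sk "https://192.168.39.125:8443/healthz?verbose"    # per-check [+]/[-] breakdown like the 500 responses above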
	I0816 13:45:17.635885   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:45:17.635892   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:45:17.637609   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:45:17.638828   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:45:17.650034   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
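The 496-byte file pushed here is the bridge CNI configuration; its exact contents are not shown in the log, but it can be inspected directly on the node:

    sudo cat /etc/cni/net.d/1-k8s.conflist   # the conflist the kubelet/CRI-O will use for pod networking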
	I0816 13:45:17.681352   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:45:17.691752   57240 system_pods.go:59] 8 kube-system pods found
	I0816 13:45:17.691784   57240 system_pods.go:61] "coredns-6f6b679f8f-phxht" [df7bd896-d1c6-4a0e-aead-e3db36e915da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 13:45:17.691792   57240 system_pods.go:61] "etcd-embed-certs-302520" [ef7bae1c-7cd3-4d8e-b2fc-e5837f4c5a1e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 13:45:17.691801   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [957ba8ec-91ae-4cea-902f-81a286e35659] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 13:45:17.691806   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [afbfc2da-5435-4ebb-ada0-e0edc9d09a7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 13:45:17.691817   57240 system_pods.go:61] "kube-proxy-nnc6b" [ec8b820d-6f1d-4777-9f76-7efffb4e6e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 13:45:17.691824   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [077024c8-3dfd-4e8c-850a-333b63d3f23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 13:45:17.691832   57240 system_pods.go:61] "metrics-server-6867b74b74-9277d" [5d7ee9e5-b40c-4840-9fb4-0b7b8be9faca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:45:17.691837   57240 system_pods.go:61] "storage-provisioner" [6f3dc7f6-a3e0-4bc3-b362-e1d97633d0eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 13:45:17.691854   57240 system_pods.go:74] duration metric: took 10.481601ms to wait for pod list to return data ...
	I0816 13:45:17.691861   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:45:17.695253   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:45:17.695278   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:45:17.695292   57240 node_conditions.go:105] duration metric: took 3.4236ms to run NodePressure ...
	I0816 13:45:17.695311   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 13:45:17.996024   57240 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999887   57240 kubeadm.go:739] kubelet initialised
	I0816 13:45:17.999906   57240 kubeadm.go:740] duration metric: took 3.859222ms waiting for restarted kubelet to initialise ...
	I0816 13:45:17.999913   57240 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:45:18.004476   57240 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.009142   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009162   57240 pod_ready.go:82] duration metric: took 4.665087ms for pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.009170   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "coredns-6f6b679f8f-phxht" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.009175   57240 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.014083   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014102   57240 pod_ready.go:82] duration metric: took 4.91913ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.014118   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "etcd-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.014124   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.018257   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018276   57240 pod_ready.go:82] duration metric: took 4.14471ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.018283   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.018288   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.085229   57240 pod_ready.go:98] node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085257   57240 pod_ready.go:82] duration metric: took 66.95357ms for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	E0816 13:45:18.085269   57240 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-302520" hosting pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-302520" has status "Ready":"False"
	I0816 13:45:18.085276   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485094   57240 pod_ready.go:93] pod "kube-proxy-nnc6b" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:18.485124   57240 pod_ready.go:82] duration metric: took 399.831747ms for pod "kube-proxy-nnc6b" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:18.485135   57240 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
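pod_ready is checking each pod's Ready condition, but only after the hosting node itself reports Ready, which is why the earlier waits were skipped with "node ... has status Ready:False". A manual equivalent, assuming the kubeconfig context is named after the profile as usual for minikube:

    kubectl --context embed-certs-302520 get node embed-certs-302520 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'     # node must be True first
    kubectl --context embed-certs-302520 -n kube-system get pod kube-scheduler-embed-certs-302520 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'     # prints True once the pod is Ready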
	I0816 13:45:15.248141   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:15.747452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.247654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.747569   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.248203   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:17.747951   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.248147   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:18.747490   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.248135   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:19.748201   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:16.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.606838   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:18.857109   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.858242   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.491635   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:22.492484   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:24.992054   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:20.247741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:20.747432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.247600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.748309   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.247438   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:22.748379   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.247577   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:23.747950   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.247733   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:24.748079   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:21.107371   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.607589   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:23.357770   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.358102   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:26.992544   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.491552   57240 pod_ready.go:103] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:25.247402   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:25.747623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.248101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.747403   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.248040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:27.747380   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.247857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:28.748374   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.247819   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:29.747331   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:26.106454   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:28.107564   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.115954   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:27.358671   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:29.857631   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:31.862487   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.491291   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:45:30.491320   57240 pod_ready.go:82] duration metric: took 12.006175772s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:30.491333   57240 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	I0816 13:45:32.497481   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.500397   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:30.247771   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:30.747706   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.247762   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:31.748013   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.247551   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:32.748020   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.247432   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:33.747594   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:34.247750   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:34.247831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:34.295412   57945 cri.go:89] found id: ""
	I0816 13:45:34.295439   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.295461   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:34.295468   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:34.295529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:34.332061   57945 cri.go:89] found id: ""
	I0816 13:45:34.332085   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.332093   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:34.332100   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:34.332158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:34.369512   57945 cri.go:89] found id: ""
	I0816 13:45:34.369535   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.369546   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:34.369553   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:34.369617   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:34.406324   57945 cri.go:89] found id: ""
	I0816 13:45:34.406351   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.406362   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:34.406370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:34.406436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:34.442193   57945 cri.go:89] found id: ""
	I0816 13:45:34.442229   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.442239   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:34.442244   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:34.442301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:34.476563   57945 cri.go:89] found id: ""
	I0816 13:45:34.476600   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.476616   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:34.476622   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:34.476670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:34.515841   57945 cri.go:89] found id: ""
	I0816 13:45:34.515869   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.515877   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:34.515883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:34.515940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:34.551242   57945 cri.go:89] found id: ""
	I0816 13:45:34.551276   57945 logs.go:276] 0 containers: []
	W0816 13:45:34.551288   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
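Each "listing CRI containers" / found id: "" pair above is one crictl query per expected component; an empty result means no container, running or exited, exists yet under that name, which is why every lookup falls through to "No container was found". The whole sweep, condensed into one loop:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"   # -a includes exited containers; --quiet prints only IDs
    done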
	I0816 13:45:34.551305   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:34.551321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:34.564902   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:34.564944   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:34.694323   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:34.694349   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:34.694366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:34.770548   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:34.770589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:34.818339   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:34.818366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:32.606912   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.607600   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:34.358649   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:36.856727   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.003939   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.498178   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:37.370390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:37.383474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:37.383558   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:37.419911   57945 cri.go:89] found id: ""
	I0816 13:45:37.419943   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.419954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:37.419961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:37.420027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:37.453845   57945 cri.go:89] found id: ""
	I0816 13:45:37.453876   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.453884   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:37.453889   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:37.453949   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:37.489053   57945 cri.go:89] found id: ""
	I0816 13:45:37.489088   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.489099   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:37.489106   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:37.489176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:37.525631   57945 cri.go:89] found id: ""
	I0816 13:45:37.525664   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.525676   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:37.525684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:37.525743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:37.560064   57945 cri.go:89] found id: ""
	I0816 13:45:37.560089   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.560101   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:37.560109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:37.560168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:37.593856   57945 cri.go:89] found id: ""
	I0816 13:45:37.593888   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.593899   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:37.593907   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:37.593969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:37.627775   57945 cri.go:89] found id: ""
	I0816 13:45:37.627808   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.627818   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:37.627828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:37.627888   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:37.660926   57945 cri.go:89] found id: ""
	I0816 13:45:37.660962   57945 logs.go:276] 0 containers: []
	W0816 13:45:37.660973   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:37.660991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:37.661008   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:37.738954   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:37.738993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:37.778976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:37.779006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:37.831361   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:37.831397   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:37.845096   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:37.845122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:37.930797   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:37.106303   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:39.107343   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:38.857564   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.858908   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:41.998945   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:40.431616   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:40.445298   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:40.445365   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:40.478229   57945 cri.go:89] found id: ""
	I0816 13:45:40.478252   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.478259   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:40.478265   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:40.478313   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:40.514721   57945 cri.go:89] found id: ""
	I0816 13:45:40.514744   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.514754   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:40.514761   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:40.514819   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:40.550604   57945 cri.go:89] found id: ""
	I0816 13:45:40.550629   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.550637   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:40.550644   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:40.550700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:40.589286   57945 cri.go:89] found id: ""
	I0816 13:45:40.589312   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.589320   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:40.589326   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:40.589382   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:40.622689   57945 cri.go:89] found id: ""
	I0816 13:45:40.622709   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.622717   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:40.622722   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:40.622778   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:40.660872   57945 cri.go:89] found id: ""
	I0816 13:45:40.660897   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.660915   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:40.660925   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:40.660986   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:40.697369   57945 cri.go:89] found id: ""
	I0816 13:45:40.697395   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.697404   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:40.697415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:40.697521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:40.733565   57945 cri.go:89] found id: ""
	I0816 13:45:40.733594   57945 logs.go:276] 0 containers: []
	W0816 13:45:40.733604   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:40.733615   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:40.733629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:40.770951   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:40.770993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:40.824983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:40.825025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:40.838846   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:40.838876   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:40.915687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:40.915718   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:40.915733   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:43.496409   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:43.511419   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:43.511485   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:43.556996   57945 cri.go:89] found id: ""
	I0816 13:45:43.557031   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.557042   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:43.557050   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:43.557102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:43.609200   57945 cri.go:89] found id: ""
	I0816 13:45:43.609228   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.609237   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:43.609244   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:43.609305   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:43.648434   57945 cri.go:89] found id: ""
	I0816 13:45:43.648458   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.648467   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:43.648474   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:43.648538   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:43.687179   57945 cri.go:89] found id: ""
	I0816 13:45:43.687214   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.687222   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:43.687228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:43.687293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:43.721723   57945 cri.go:89] found id: ""
	I0816 13:45:43.721751   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.721762   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:43.721769   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:43.721847   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:43.756469   57945 cri.go:89] found id: ""
	I0816 13:45:43.756492   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.756501   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:43.756506   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:43.756560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:43.790241   57945 cri.go:89] found id: ""
	I0816 13:45:43.790267   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.790275   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:43.790281   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:43.790329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:43.828620   57945 cri.go:89] found id: ""
	I0816 13:45:43.828646   57945 logs.go:276] 0 containers: []
	W0816 13:45:43.828654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:43.828662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:43.828677   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:43.879573   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:43.879607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:43.893813   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:43.893842   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:43.975188   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:43.975209   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:43.975220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:44.054231   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:44.054266   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:41.609813   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:44.116781   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:43.358670   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:45.857710   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.497146   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:48.498302   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:46.593190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:46.607472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:46.607568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:46.642764   57945 cri.go:89] found id: ""
	I0816 13:45:46.642787   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.642795   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:46.642800   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:46.642848   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:46.678965   57945 cri.go:89] found id: ""
	I0816 13:45:46.678992   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.679000   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:46.679005   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:46.679051   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:46.717632   57945 cri.go:89] found id: ""
	I0816 13:45:46.717657   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.717666   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:46.717671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:46.717720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:46.758359   57945 cri.go:89] found id: ""
	I0816 13:45:46.758407   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.758419   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:46.758427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:46.758487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:46.798405   57945 cri.go:89] found id: ""
	I0816 13:45:46.798437   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.798448   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:46.798472   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:46.798547   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:46.834977   57945 cri.go:89] found id: ""
	I0816 13:45:46.835008   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.835019   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:46.835026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:46.835077   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:46.873589   57945 cri.go:89] found id: ""
	I0816 13:45:46.873622   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.873631   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:46.873638   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:46.873689   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:46.912649   57945 cri.go:89] found id: ""
	I0816 13:45:46.912680   57945 logs.go:276] 0 containers: []
	W0816 13:45:46.912691   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:46.912701   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:46.912720   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:46.966998   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:46.967038   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:46.980897   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:46.980937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:47.053055   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:47.053079   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:47.053091   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:47.136251   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:47.136291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:49.678283   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:49.691134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:49.691244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:49.726598   57945 cri.go:89] found id: ""
	I0816 13:45:49.726644   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.726656   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:49.726665   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:49.726729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:49.760499   57945 cri.go:89] found id: ""
	I0816 13:45:49.760526   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.760536   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:49.760543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:49.760602   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:49.794064   57945 cri.go:89] found id: ""
	I0816 13:45:49.794087   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.794094   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:49.794099   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:49.794162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:49.830016   57945 cri.go:89] found id: ""
	I0816 13:45:49.830045   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.830057   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:49.830071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:49.830125   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:49.865230   57945 cri.go:89] found id: ""
	I0816 13:45:49.865248   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.865255   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:49.865261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:49.865310   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:49.898715   57945 cri.go:89] found id: ""
	I0816 13:45:49.898743   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.898752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:49.898758   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:49.898807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:49.932831   57945 cri.go:89] found id: ""
	I0816 13:45:49.932857   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.932868   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:49.932875   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:49.932948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:49.965580   57945 cri.go:89] found id: ""
	I0816 13:45:49.965609   57945 logs.go:276] 0 containers: []
	W0816 13:45:49.965617   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:49.965626   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:49.965642   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:50.058462   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:50.058516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:46.606815   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.107387   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:47.858274   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:49.861382   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.999007   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:53.497248   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:50.111179   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:50.111206   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:50.162529   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:50.162561   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:50.176552   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:50.176579   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:50.243818   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:52.744808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:52.757430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:52.757513   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:52.793177   57945 cri.go:89] found id: ""
	I0816 13:45:52.793209   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.793217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:52.793224   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:52.793276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:52.827846   57945 cri.go:89] found id: ""
	I0816 13:45:52.827874   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.827886   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:52.827894   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:52.827959   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:52.864662   57945 cri.go:89] found id: ""
	I0816 13:45:52.864693   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.864705   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:52.864711   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:52.864761   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:52.901124   57945 cri.go:89] found id: ""
	I0816 13:45:52.901154   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.901166   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:52.901174   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:52.901234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:52.939763   57945 cri.go:89] found id: ""
	I0816 13:45:52.939791   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.939799   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:52.939805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:52.939858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:52.975045   57945 cri.go:89] found id: ""
	I0816 13:45:52.975075   57945 logs.go:276] 0 containers: []
	W0816 13:45:52.975086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:52.975092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:52.975141   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:53.014686   57945 cri.go:89] found id: ""
	I0816 13:45:53.014714   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.014725   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:53.014732   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:53.014794   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:53.049445   57945 cri.go:89] found id: ""
	I0816 13:45:53.049466   57945 logs.go:276] 0 containers: []
	W0816 13:45:53.049473   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:53.049482   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:53.049492   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:53.101819   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:53.101850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:53.116165   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:53.116191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:53.191022   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:53.191047   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:53.191062   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:53.268901   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:53.268952   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:51.607047   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.106991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:52.363317   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:54.857924   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.497520   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.498597   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.997729   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:55.814862   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:55.828817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:55.828875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:55.877556   57945 cri.go:89] found id: ""
	I0816 13:45:55.877586   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.877595   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:55.877606   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:55.877667   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:55.912820   57945 cri.go:89] found id: ""
	I0816 13:45:55.912848   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.912855   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:55.912862   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:55.912918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:55.947419   57945 cri.go:89] found id: ""
	I0816 13:45:55.947449   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.947460   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:55.947467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:55.947532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:55.980964   57945 cri.go:89] found id: ""
	I0816 13:45:55.980990   57945 logs.go:276] 0 containers: []
	W0816 13:45:55.981001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:55.981008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:55.981068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:56.019021   57945 cri.go:89] found id: ""
	I0816 13:45:56.019045   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.019053   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:56.019059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:56.019116   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:56.054950   57945 cri.go:89] found id: ""
	I0816 13:45:56.054974   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.054985   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:56.054992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:56.055057   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:56.091165   57945 cri.go:89] found id: ""
	I0816 13:45:56.091192   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.091202   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:56.091211   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:56.091268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:56.125748   57945 cri.go:89] found id: ""
	I0816 13:45:56.125775   57945 logs.go:276] 0 containers: []
	W0816 13:45:56.125787   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:56.125797   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:56.125811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:56.174836   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:56.174870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:56.188501   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:56.188529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:56.266017   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.266038   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:56.266050   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:56.346482   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:56.346519   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:58.887176   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:45:58.900464   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:45:58.900531   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:45:58.939526   57945 cri.go:89] found id: ""
	I0816 13:45:58.939558   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.939568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:45:58.939576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:45:58.939639   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:45:58.975256   57945 cri.go:89] found id: ""
	I0816 13:45:58.975281   57945 logs.go:276] 0 containers: []
	W0816 13:45:58.975289   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:45:58.975294   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:45:58.975350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:45:59.012708   57945 cri.go:89] found id: ""
	I0816 13:45:59.012736   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.012746   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:45:59.012754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:45:59.012820   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:45:59.049385   57945 cri.go:89] found id: ""
	I0816 13:45:59.049417   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.049430   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:45:59.049438   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:45:59.049505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:45:59.084750   57945 cri.go:89] found id: ""
	I0816 13:45:59.084773   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.084781   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:45:59.084786   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:45:59.084834   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:45:59.129464   57945 cri.go:89] found id: ""
	I0816 13:45:59.129495   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.129506   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:45:59.129514   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:45:59.129578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:45:59.166772   57945 cri.go:89] found id: ""
	I0816 13:45:59.166794   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.166802   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:45:59.166808   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:45:59.166867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:45:59.203843   57945 cri.go:89] found id: ""
	I0816 13:45:59.203876   57945 logs.go:276] 0 containers: []
	W0816 13:45:59.203886   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:45:59.203897   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:45:59.203911   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:45:59.285798   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:45:59.285837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:45:59.324704   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:45:59.324729   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:45:59.377532   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:45:59.377566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:45:59.391209   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:45:59.391236   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:45:59.463420   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:45:56.107187   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:58.606550   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:57.358875   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:45:59.857940   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.859677   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.998260   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.498473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:01.964395   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:01.977380   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:01.977452   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:02.014480   57945 cri.go:89] found id: ""
	I0816 13:46:02.014504   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.014511   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:02.014517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:02.014578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:02.057233   57945 cri.go:89] found id: ""
	I0816 13:46:02.057262   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.057270   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:02.057277   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:02.057326   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:02.095936   57945 cri.go:89] found id: ""
	I0816 13:46:02.095962   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.095970   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:02.095976   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:02.096020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:02.136949   57945 cri.go:89] found id: ""
	I0816 13:46:02.136980   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.136992   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:02.136998   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:02.137047   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:02.172610   57945 cri.go:89] found id: ""
	I0816 13:46:02.172648   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.172658   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:02.172666   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:02.172729   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:02.211216   57945 cri.go:89] found id: ""
	I0816 13:46:02.211247   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.211257   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:02.211266   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:02.211334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:02.245705   57945 cri.go:89] found id: ""
	I0816 13:46:02.245735   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.245746   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:02.245753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:02.245823   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:02.281057   57945 cri.go:89] found id: ""
	I0816 13:46:02.281082   57945 logs.go:276] 0 containers: []
	W0816 13:46:02.281093   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:02.281103   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:02.281128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:02.333334   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:02.333377   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:02.347520   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:02.347546   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:02.427543   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:02.427572   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:02.427587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:02.514871   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:02.514908   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:05.057817   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:05.070491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:05.070554   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:01.106533   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:03.107325   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.107629   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:04.359077   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.857557   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:06.997606   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:08.998915   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:05.108262   57945 cri.go:89] found id: ""
	I0816 13:46:05.108290   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.108301   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:05.108308   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:05.108361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:05.143962   57945 cri.go:89] found id: ""
	I0816 13:46:05.143995   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.144005   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:05.144011   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:05.144067   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:05.180032   57945 cri.go:89] found id: ""
	I0816 13:46:05.180058   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.180068   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:05.180076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:05.180128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:05.214077   57945 cri.go:89] found id: ""
	I0816 13:46:05.214107   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.214115   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:05.214121   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:05.214171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:05.250887   57945 cri.go:89] found id: ""
	I0816 13:46:05.250920   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.250930   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:05.250937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:05.251000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:05.285592   57945 cri.go:89] found id: ""
	I0816 13:46:05.285615   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.285623   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:05.285629   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:05.285675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:05.325221   57945 cri.go:89] found id: ""
	I0816 13:46:05.325248   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.325258   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:05.325264   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:05.325307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:05.364025   57945 cri.go:89] found id: ""
	I0816 13:46:05.364047   57945 logs.go:276] 0 containers: []
	W0816 13:46:05.364055   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:05.364062   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:05.364074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:05.413364   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:05.413395   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:05.427328   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:05.427358   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:05.504040   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:05.504067   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:05.504086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:05.580975   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:05.581010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:08.123111   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:08.136822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:08.136902   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:08.169471   57945 cri.go:89] found id: ""
	I0816 13:46:08.169495   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.169503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:08.169508   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:08.169556   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:08.211041   57945 cri.go:89] found id: ""
	I0816 13:46:08.211069   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.211081   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:08.211087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:08.211148   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:08.247564   57945 cri.go:89] found id: ""
	I0816 13:46:08.247590   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.247600   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:08.247607   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:08.247670   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:08.284283   57945 cri.go:89] found id: ""
	I0816 13:46:08.284312   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.284324   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:08.284332   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:08.284384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:08.320287   57945 cri.go:89] found id: ""
	I0816 13:46:08.320311   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.320319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:08.320325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:08.320371   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:08.358294   57945 cri.go:89] found id: ""
	I0816 13:46:08.358324   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.358342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:08.358356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:08.358423   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:08.394386   57945 cri.go:89] found id: ""
	I0816 13:46:08.394414   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.394424   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:08.394432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:08.394502   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:08.439608   57945 cri.go:89] found id: ""
	I0816 13:46:08.439635   57945 logs.go:276] 0 containers: []
	W0816 13:46:08.439643   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:08.439653   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:08.439668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:08.493878   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:08.493918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:08.508080   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:08.508114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:08.584703   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:08.584727   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:08.584745   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:08.663741   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:08.663776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:07.606112   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.608137   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:09.357201   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.359055   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.497851   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.998849   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:11.204946   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:11.218720   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:11.218800   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:11.257825   57945 cri.go:89] found id: ""
	I0816 13:46:11.257852   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.257862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:11.257870   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:11.257930   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:11.293910   57945 cri.go:89] found id: ""
	I0816 13:46:11.293946   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.293958   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:11.293966   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:11.294023   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:11.330005   57945 cri.go:89] found id: ""
	I0816 13:46:11.330031   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.330039   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:11.330045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:11.330101   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:11.365057   57945 cri.go:89] found id: ""
	I0816 13:46:11.365083   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.365093   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:11.365101   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:11.365159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:11.401440   57945 cri.go:89] found id: ""
	I0816 13:46:11.401467   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.401475   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:11.401481   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:11.401532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:11.435329   57945 cri.go:89] found id: ""
	I0816 13:46:11.435354   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.435361   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:11.435368   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:11.435427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:11.468343   57945 cri.go:89] found id: ""
	I0816 13:46:11.468373   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.468393   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:11.468401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:11.468465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:11.503326   57945 cri.go:89] found id: ""
	I0816 13:46:11.503347   57945 logs.go:276] 0 containers: []
	W0816 13:46:11.503362   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:11.503370   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:11.503386   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:11.554681   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:11.554712   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:11.568056   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:11.568087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:11.646023   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:11.646049   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:11.646060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:11.726154   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:11.726191   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.266008   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:14.280328   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:14.280408   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:14.316359   57945 cri.go:89] found id: ""
	I0816 13:46:14.316388   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.316398   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:14.316406   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:14.316470   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:14.360143   57945 cri.go:89] found id: ""
	I0816 13:46:14.360165   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.360172   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:14.360183   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:14.360234   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:14.394692   57945 cri.go:89] found id: ""
	I0816 13:46:14.394717   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.394724   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:14.394730   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:14.394789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:14.431928   57945 cri.go:89] found id: ""
	I0816 13:46:14.431957   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.431968   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:14.431975   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:14.432041   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:14.469223   57945 cri.go:89] found id: ""
	I0816 13:46:14.469253   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.469265   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:14.469272   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:14.469334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:14.506893   57945 cri.go:89] found id: ""
	I0816 13:46:14.506917   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.506925   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:14.506931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:14.506991   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:14.544801   57945 cri.go:89] found id: ""
	I0816 13:46:14.544825   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.544833   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:14.544839   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:14.544891   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:14.579489   57945 cri.go:89] found id: ""
	I0816 13:46:14.579528   57945 logs.go:276] 0 containers: []
	W0816 13:46:14.579541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:14.579556   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:14.579572   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:14.656527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:14.656551   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:14.656573   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:14.736792   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:14.736823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:14.775976   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:14.776010   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:14.827804   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:14.827836   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:12.106330   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:14.106732   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:13.857302   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:15.858233   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:16.497347   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.497948   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:17.341506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:17.357136   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:17.357214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:17.397810   57945 cri.go:89] found id: ""
	I0816 13:46:17.397839   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.397867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:17.397874   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:17.397936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:17.435170   57945 cri.go:89] found id: ""
	I0816 13:46:17.435198   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.435208   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:17.435214   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:17.435260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:17.468837   57945 cri.go:89] found id: ""
	I0816 13:46:17.468871   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.468882   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:17.468891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:17.468962   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:17.503884   57945 cri.go:89] found id: ""
	I0816 13:46:17.503910   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.503921   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:17.503930   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:17.503977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:17.541204   57945 cri.go:89] found id: ""
	I0816 13:46:17.541232   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.541244   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:17.541251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:17.541312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:17.577007   57945 cri.go:89] found id: ""
	I0816 13:46:17.577031   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.577038   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:17.577045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:17.577092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:17.611352   57945 cri.go:89] found id: ""
	I0816 13:46:17.611373   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.611380   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:17.611386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:17.611433   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:17.648108   57945 cri.go:89] found id: ""
	I0816 13:46:17.648147   57945 logs.go:276] 0 containers: []
	W0816 13:46:17.648155   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:17.648164   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:17.648176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:17.720475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:17.720500   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:17.720512   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:17.797602   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:17.797636   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:17.842985   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:17.843019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:17.893581   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:17.893617   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:16.107456   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.107650   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.608155   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:18.357472   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.857964   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.498563   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:22.998319   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:20.408415   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:20.423303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:20.423384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:20.459057   57945 cri.go:89] found id: ""
	I0816 13:46:20.459083   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.459091   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:20.459096   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:20.459152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:20.496447   57945 cri.go:89] found id: ""
	I0816 13:46:20.496471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.496479   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:20.496485   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:20.496532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:20.538508   57945 cri.go:89] found id: ""
	I0816 13:46:20.538531   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.538539   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:20.538544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:20.538600   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:20.579350   57945 cri.go:89] found id: ""
	I0816 13:46:20.579382   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.579390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:20.579396   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:20.579465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:20.615088   57945 cri.go:89] found id: ""
	I0816 13:46:20.615118   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.615130   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:20.615137   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:20.615203   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:20.650849   57945 cri.go:89] found id: ""
	I0816 13:46:20.650877   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.650884   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:20.650890   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:20.650950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:20.691439   57945 cri.go:89] found id: ""
	I0816 13:46:20.691471   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.691482   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:20.691490   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:20.691553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:20.727795   57945 cri.go:89] found id: ""
	I0816 13:46:20.727820   57945 logs.go:276] 0 containers: []
	W0816 13:46:20.727829   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:20.727836   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:20.727847   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:20.806369   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:20.806390   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:20.806402   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:20.886313   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:20.886345   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:20.926079   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:20.926104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:20.981052   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:20.981088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.496179   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:23.509918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:23.509983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:23.546175   57945 cri.go:89] found id: ""
	I0816 13:46:23.546214   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.546224   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:23.546231   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:23.546293   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:23.581553   57945 cri.go:89] found id: ""
	I0816 13:46:23.581581   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.581594   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:23.581600   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:23.581648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:23.614559   57945 cri.go:89] found id: ""
	I0816 13:46:23.614584   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.614592   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:23.614597   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:23.614651   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:23.649239   57945 cri.go:89] found id: ""
	I0816 13:46:23.649272   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.649283   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:23.649291   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:23.649354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:23.688017   57945 cri.go:89] found id: ""
	I0816 13:46:23.688044   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.688054   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:23.688062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:23.688126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:23.723475   57945 cri.go:89] found id: ""
	I0816 13:46:23.723507   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.723517   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:23.723525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:23.723585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:23.756028   57945 cri.go:89] found id: ""
	I0816 13:46:23.756055   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.756063   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:23.756069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:23.756121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:23.789965   57945 cri.go:89] found id: ""
	I0816 13:46:23.789993   57945 logs.go:276] 0 containers: []
	W0816 13:46:23.790000   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:23.790009   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:23.790029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:23.803669   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:23.803696   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:23.882614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:23.882642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:23.882659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:23.957733   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:23.957773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:23.994270   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:23.994298   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:23.106190   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.106765   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:23.356443   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.356705   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:25.496930   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.497933   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.500639   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:26.546600   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:26.560153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:26.560221   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:26.594482   57945 cri.go:89] found id: ""
	I0816 13:46:26.594506   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.594520   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:26.594528   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:26.594585   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:26.628020   57945 cri.go:89] found id: ""
	I0816 13:46:26.628051   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.628061   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:26.628068   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:26.628126   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:26.664248   57945 cri.go:89] found id: ""
	I0816 13:46:26.664277   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.664288   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:26.664295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:26.664357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:26.700365   57945 cri.go:89] found id: ""
	I0816 13:46:26.700389   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.700397   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:26.700403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:26.700464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:26.736170   57945 cri.go:89] found id: ""
	I0816 13:46:26.736204   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.736212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:26.736219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:26.736268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:26.773411   57945 cri.go:89] found id: ""
	I0816 13:46:26.773441   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.773449   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:26.773455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:26.773514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:26.811994   57945 cri.go:89] found id: ""
	I0816 13:46:26.812022   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.812030   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:26.812036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:26.812087   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:26.846621   57945 cri.go:89] found id: ""
	I0816 13:46:26.846647   57945 logs.go:276] 0 containers: []
	W0816 13:46:26.846654   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:26.846662   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:26.846680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:26.902255   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:26.902293   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:26.916117   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:26.916148   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:26.986755   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:26.986786   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:26.986802   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:27.069607   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:27.069644   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:29.610859   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:29.624599   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:29.624654   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:29.660421   57945 cri.go:89] found id: ""
	I0816 13:46:29.660454   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.660465   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:29.660474   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:29.660534   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:29.694828   57945 cri.go:89] found id: ""
	I0816 13:46:29.694853   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.694861   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:29.694867   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:29.694933   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:29.734054   57945 cri.go:89] found id: ""
	I0816 13:46:29.734083   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.734093   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:29.734100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:29.734159   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:29.771358   57945 cri.go:89] found id: ""
	I0816 13:46:29.771383   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.771391   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:29.771397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:29.771464   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:29.806781   57945 cri.go:89] found id: ""
	I0816 13:46:29.806804   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.806812   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:29.806819   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:29.806879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:29.841716   57945 cri.go:89] found id: ""
	I0816 13:46:29.841743   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.841754   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:29.841762   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:29.841827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:29.880115   57945 cri.go:89] found id: ""
	I0816 13:46:29.880144   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.880152   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:29.880158   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:29.880226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:29.916282   57945 cri.go:89] found id: ""
	I0816 13:46:29.916311   57945 logs.go:276] 0 containers: []
	W0816 13:46:29.916321   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:29.916331   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:29.916347   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:29.996027   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:29.996059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:30.035284   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:30.035315   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:30.085336   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:30.085368   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:30.099534   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:30.099562   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:46:27.606739   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.606870   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:27.357970   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:29.861012   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:31.998584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.497511   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	W0816 13:46:30.174105   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.674746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:32.688631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:32.688699   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:32.722967   57945 cri.go:89] found id: ""
	I0816 13:46:32.722997   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.723007   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:32.723014   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:32.723075   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:32.757223   57945 cri.go:89] found id: ""
	I0816 13:46:32.757257   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.757267   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:32.757275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:32.757342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:32.793773   57945 cri.go:89] found id: ""
	I0816 13:46:32.793795   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.793804   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:32.793811   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:32.793879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:32.829541   57945 cri.go:89] found id: ""
	I0816 13:46:32.829565   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.829573   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:32.829578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:32.829626   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:32.864053   57945 cri.go:89] found id: ""
	I0816 13:46:32.864079   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.864090   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:32.864097   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:32.864155   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:32.901420   57945 cri.go:89] found id: ""
	I0816 13:46:32.901451   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.901459   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:32.901466   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:32.901522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:32.933082   57945 cri.go:89] found id: ""
	I0816 13:46:32.933110   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.933118   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:32.933125   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:32.933186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:32.966640   57945 cri.go:89] found id: ""
	I0816 13:46:32.966664   57945 logs.go:276] 0 containers: []
	W0816 13:46:32.966672   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:32.966680   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:32.966692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:33.048593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:33.048627   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:33.089329   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:33.089366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:33.144728   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:33.144764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:33.158639   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:33.158666   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:33.227076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:32.106718   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.606961   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:32.357555   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:34.857062   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.857679   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:36.997085   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.999741   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:35.727465   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:35.740850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:35.740940   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:35.777294   57945 cri.go:89] found id: ""
	I0816 13:46:35.777317   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.777325   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:35.777330   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:35.777394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:35.815582   57945 cri.go:89] found id: ""
	I0816 13:46:35.815604   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.815612   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:35.815618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:35.815672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:35.848338   57945 cri.go:89] found id: ""
	I0816 13:46:35.848363   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.848370   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:35.848376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:35.848432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:35.884834   57945 cri.go:89] found id: ""
	I0816 13:46:35.884862   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.884870   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:35.884876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:35.884953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:35.919022   57945 cri.go:89] found id: ""
	I0816 13:46:35.919046   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.919058   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:35.919063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:35.919150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:35.953087   57945 cri.go:89] found id: ""
	I0816 13:46:35.953111   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.953119   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:35.953124   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:35.953182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:35.984776   57945 cri.go:89] found id: ""
	I0816 13:46:35.984804   57945 logs.go:276] 0 containers: []
	W0816 13:46:35.984814   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:35.984821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:35.984882   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:36.028921   57945 cri.go:89] found id: ""
	I0816 13:46:36.028946   57945 logs.go:276] 0 containers: []
	W0816 13:46:36.028954   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:36.028964   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:36.028976   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:36.091313   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:36.091342   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:36.116881   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:36.116915   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:36.186758   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:36.186778   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:36.186791   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:36.268618   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:36.268653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:38.808419   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:38.821646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:38.821708   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:38.860623   57945 cri.go:89] found id: ""
	I0816 13:46:38.860647   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.860655   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:38.860660   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:38.860712   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:38.894728   57945 cri.go:89] found id: ""
	I0816 13:46:38.894782   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.894795   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:38.894804   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:38.894870   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:38.928945   57945 cri.go:89] found id: ""
	I0816 13:46:38.928974   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.928988   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:38.928994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:38.929048   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:38.966450   57945 cri.go:89] found id: ""
	I0816 13:46:38.966474   57945 logs.go:276] 0 containers: []
	W0816 13:46:38.966482   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:38.966487   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:38.966548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:39.001554   57945 cri.go:89] found id: ""
	I0816 13:46:39.001577   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.001589   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:39.001595   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:39.001656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:39.036621   57945 cri.go:89] found id: ""
	I0816 13:46:39.036646   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.036654   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:39.036660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:39.036725   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:39.071244   57945 cri.go:89] found id: ""
	I0816 13:46:39.071271   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.071281   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:39.071289   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:39.071355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:39.107325   57945 cri.go:89] found id: ""
	I0816 13:46:39.107352   57945 logs.go:276] 0 containers: []
	W0816 13:46:39.107361   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:39.107371   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:39.107401   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:39.189172   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:39.189208   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:39.229060   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:39.229094   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:39.281983   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:39.282025   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:39.296515   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:39.296545   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:39.368488   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:37.113026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:39.606526   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:38.857809   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.358047   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.497724   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.498815   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:41.868721   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:41.883796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:41.883869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:41.922181   57945 cri.go:89] found id: ""
	I0816 13:46:41.922211   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.922222   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:41.922232   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:41.922297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:41.962213   57945 cri.go:89] found id: ""
	I0816 13:46:41.962239   57945 logs.go:276] 0 containers: []
	W0816 13:46:41.962249   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:41.962257   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:41.962321   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:42.003214   57945 cri.go:89] found id: ""
	I0816 13:46:42.003243   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.003251   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:42.003257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:42.003316   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.038594   57945 cri.go:89] found id: ""
	I0816 13:46:42.038622   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.038635   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:42.038641   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:42.038691   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:42.071377   57945 cri.go:89] found id: ""
	I0816 13:46:42.071409   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.071421   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:42.071429   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:42.071489   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:42.104777   57945 cri.go:89] found id: ""
	I0816 13:46:42.104804   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.104815   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:42.104823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:42.104879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:42.140292   57945 cri.go:89] found id: ""
	I0816 13:46:42.140324   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.140335   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:42.140342   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:42.140404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:42.174823   57945 cri.go:89] found id: ""
	I0816 13:46:42.174861   57945 logs.go:276] 0 containers: []
	W0816 13:46:42.174870   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:42.174887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:42.174906   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:42.216308   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:42.216337   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:42.269277   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:42.269304   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:42.282347   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:42.282374   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:42.358776   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:42.358796   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:42.358807   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:44.942195   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:44.955384   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:44.955465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:44.994181   57945 cri.go:89] found id: ""
	I0816 13:46:44.994212   57945 logs.go:276] 0 containers: []
	W0816 13:46:44.994223   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:44.994230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:44.994286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:45.028937   57945 cri.go:89] found id: ""
	I0816 13:46:45.028972   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.028984   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:45.028991   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:45.029049   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:45.068193   57945 cri.go:89] found id: ""
	I0816 13:46:45.068223   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.068237   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:45.068249   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:45.068309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:42.108651   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:44.606597   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:43.856419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.858360   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.998195   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.497584   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:45.100553   57945 cri.go:89] found id: ""
	I0816 13:46:45.100653   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.100667   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:45.100674   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:45.100734   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:45.135676   57945 cri.go:89] found id: ""
	I0816 13:46:45.135704   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.135714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:45.135721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:45.135784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:45.174611   57945 cri.go:89] found id: ""
	I0816 13:46:45.174642   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.174653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:45.174660   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:45.174713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:45.209544   57945 cri.go:89] found id: ""
	I0816 13:46:45.209573   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.209582   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:45.209588   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:45.209649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:45.245622   57945 cri.go:89] found id: ""
	I0816 13:46:45.245654   57945 logs.go:276] 0 containers: []
	W0816 13:46:45.245664   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:45.245677   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:45.245692   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:45.284294   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:45.284322   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:45.335720   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:45.335751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:45.350014   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:45.350039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:45.419816   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:45.419839   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:45.419854   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.005991   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:48.019754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:48.019814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:48.053269   57945 cri.go:89] found id: ""
	I0816 13:46:48.053331   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.053344   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:48.053351   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:48.053404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:48.086992   57945 cri.go:89] found id: ""
	I0816 13:46:48.087024   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.087032   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:48.087037   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:48.087098   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:48.123008   57945 cri.go:89] found id: ""
	I0816 13:46:48.123037   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.123046   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:48.123053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:48.123110   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:48.158035   57945 cri.go:89] found id: ""
	I0816 13:46:48.158064   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.158075   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:48.158082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:48.158146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:48.194576   57945 cri.go:89] found id: ""
	I0816 13:46:48.194605   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.194616   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:48.194624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:48.194687   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:48.232844   57945 cri.go:89] found id: ""
	I0816 13:46:48.232870   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.232878   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:48.232883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:48.232955   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:48.267525   57945 cri.go:89] found id: ""
	I0816 13:46:48.267551   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.267559   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:48.267564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:48.267629   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:48.305436   57945 cri.go:89] found id: ""
	I0816 13:46:48.305465   57945 logs.go:276] 0 containers: []
	W0816 13:46:48.305477   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:48.305487   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:48.305502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:48.357755   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:48.357781   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:48.372672   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:48.372703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:48.439076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:48.439099   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:48.439114   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:48.524142   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:48.524181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:47.106288   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:49.108117   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:48.357517   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.857069   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:50.501014   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:52.998618   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:51.065770   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:51.078797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:51.078868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:51.118864   57945 cri.go:89] found id: ""
	I0816 13:46:51.118891   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.118899   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:51.118905   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:51.118964   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:51.153024   57945 cri.go:89] found id: ""
	I0816 13:46:51.153049   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.153057   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:51.153062   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:51.153111   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:51.189505   57945 cri.go:89] found id: ""
	I0816 13:46:51.189531   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.189542   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:51.189550   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:51.189611   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:51.228456   57945 cri.go:89] found id: ""
	I0816 13:46:51.228483   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.228494   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:51.228502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:51.228565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:51.264436   57945 cri.go:89] found id: ""
	I0816 13:46:51.264463   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.264474   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:51.264482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:51.264542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:51.300291   57945 cri.go:89] found id: ""
	I0816 13:46:51.300315   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.300323   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:51.300329   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:51.300379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:51.334878   57945 cri.go:89] found id: ""
	I0816 13:46:51.334902   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.334909   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:51.334917   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:51.334969   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:51.376467   57945 cri.go:89] found id: ""
	I0816 13:46:51.376491   57945 logs.go:276] 0 containers: []
	W0816 13:46:51.376499   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:51.376507   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:51.376518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.420168   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:51.420194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:51.470869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:51.470900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:51.484877   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:51.484903   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:51.557587   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:51.557614   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:51.557631   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.141123   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:54.154790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:54.154864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:54.189468   57945 cri.go:89] found id: ""
	I0816 13:46:54.189495   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.189503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:54.189509   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:54.189562   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:54.223774   57945 cri.go:89] found id: ""
	I0816 13:46:54.223805   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.223817   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:54.223826   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:54.223883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:54.257975   57945 cri.go:89] found id: ""
	I0816 13:46:54.258004   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.258014   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:54.258022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:54.258078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:54.296144   57945 cri.go:89] found id: ""
	I0816 13:46:54.296174   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.296193   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:54.296201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:54.296276   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:54.336734   57945 cri.go:89] found id: ""
	I0816 13:46:54.336760   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.336770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:54.336775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:54.336839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:54.370572   57945 cri.go:89] found id: ""
	I0816 13:46:54.370602   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.370609   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:54.370615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:54.370676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:54.405703   57945 cri.go:89] found id: ""
	I0816 13:46:54.405735   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.405745   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:54.405753   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:54.405816   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:54.441466   57945 cri.go:89] found id: ""
	I0816 13:46:54.441492   57945 logs.go:276] 0 containers: []
	W0816 13:46:54.441500   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:54.441509   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:54.441521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:54.492539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:54.492570   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:54.506313   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:54.506341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:54.580127   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:54.580151   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:54.580172   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:54.658597   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:54.658633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:51.607335   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:54.106631   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:53.357847   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.857456   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:55.497897   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.999173   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.198267   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:46:57.213292   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:46:57.213354   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:46:57.248838   57945 cri.go:89] found id: ""
	I0816 13:46:57.248862   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.248870   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:46:57.248876   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:46:57.248951   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:46:57.283868   57945 cri.go:89] found id: ""
	I0816 13:46:57.283895   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.283903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:46:57.283908   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:46:57.283958   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:46:57.319363   57945 cri.go:89] found id: ""
	I0816 13:46:57.319392   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.319405   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:46:57.319412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:46:57.319465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:46:57.359895   57945 cri.go:89] found id: ""
	I0816 13:46:57.359937   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.359949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:46:57.359957   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:46:57.360024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:46:57.398025   57945 cri.go:89] found id: ""
	I0816 13:46:57.398057   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.398068   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:46:57.398075   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:46:57.398140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:46:57.436101   57945 cri.go:89] found id: ""
	I0816 13:46:57.436132   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.436140   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:46:57.436147   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:46:57.436223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:46:57.471737   57945 cri.go:89] found id: ""
	I0816 13:46:57.471767   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.471778   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:46:57.471785   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:46:57.471845   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:46:57.508664   57945 cri.go:89] found id: ""
	I0816 13:46:57.508694   57945 logs.go:276] 0 containers: []
	W0816 13:46:57.508705   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:46:57.508716   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:46:57.508730   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:46:57.559122   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:46:57.559155   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:46:57.572504   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:46:57.572529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:46:57.646721   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:46:57.646743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:46:57.646756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:46:57.725107   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:46:57.725153   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:46:56.107168   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:58.606805   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.607098   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:46:57.857681   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.357433   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.497738   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.998036   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.998316   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:00.269137   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:00.284285   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:00.284363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:00.325613   57945 cri.go:89] found id: ""
	I0816 13:47:00.325645   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.325654   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:00.325662   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:00.325721   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:00.361706   57945 cri.go:89] found id: ""
	I0816 13:47:00.361732   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.361742   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:00.361750   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:00.361808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:00.398453   57945 cri.go:89] found id: ""
	I0816 13:47:00.398478   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.398486   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:00.398491   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:00.398544   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:00.434233   57945 cri.go:89] found id: ""
	I0816 13:47:00.434265   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.434278   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:00.434286   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:00.434391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:00.473020   57945 cri.go:89] found id: ""
	I0816 13:47:00.473042   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.473050   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:00.473056   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:00.473117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:00.511480   57945 cri.go:89] found id: ""
	I0816 13:47:00.511507   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.511518   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:00.511525   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:00.511595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:00.546166   57945 cri.go:89] found id: ""
	I0816 13:47:00.546202   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.546209   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:00.546216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:00.546263   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:00.585285   57945 cri.go:89] found id: ""
	I0816 13:47:00.585310   57945 logs.go:276] 0 containers: []
	W0816 13:47:00.585320   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:00.585329   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:00.585348   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:00.633346   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:00.633373   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:00.687904   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:00.687937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:00.703773   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:00.703801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:00.775179   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:00.775210   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:00.775226   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.354676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:03.370107   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:03.370178   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:03.406212   57945 cri.go:89] found id: ""
	I0816 13:47:03.406245   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.406256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:03.406263   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:03.406333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:03.442887   57945 cri.go:89] found id: ""
	I0816 13:47:03.442925   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.442937   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:03.442943   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:03.443000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:03.479225   57945 cri.go:89] found id: ""
	I0816 13:47:03.479259   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.479270   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:03.479278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:03.479340   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:03.516145   57945 cri.go:89] found id: ""
	I0816 13:47:03.516181   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.516192   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:03.516203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:03.516265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:03.548225   57945 cri.go:89] found id: ""
	I0816 13:47:03.548252   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.548260   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:03.548267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:03.548324   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:03.582038   57945 cri.go:89] found id: ""
	I0816 13:47:03.582071   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.582082   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:03.582089   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:03.582160   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:03.618693   57945 cri.go:89] found id: ""
	I0816 13:47:03.618720   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.618730   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:03.618737   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:03.618793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:03.653717   57945 cri.go:89] found id: ""
	I0816 13:47:03.653742   57945 logs.go:276] 0 containers: []
	W0816 13:47:03.653751   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:03.653759   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:03.653771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:03.705909   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:03.705942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:03.720727   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:03.720751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:03.795064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:03.795089   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:03.795104   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:03.874061   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:03.874105   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:02.607546   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:05.106955   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:02.358368   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:04.359618   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.858437   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.999109   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.498087   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:06.420149   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:06.437062   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:06.437124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:06.473620   57945 cri.go:89] found id: ""
	I0816 13:47:06.473651   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.473659   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:06.473664   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:06.473720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:06.510281   57945 cri.go:89] found id: ""
	I0816 13:47:06.510307   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.510315   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:06.510321   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:06.510372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:06.546589   57945 cri.go:89] found id: ""
	I0816 13:47:06.546623   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.546634   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:06.546642   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:06.546702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:06.580629   57945 cri.go:89] found id: ""
	I0816 13:47:06.580652   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.580665   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:06.580671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:06.580718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:06.617411   57945 cri.go:89] found id: ""
	I0816 13:47:06.617439   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.617459   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:06.617468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:06.617533   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:06.654017   57945 cri.go:89] found id: ""
	I0816 13:47:06.654045   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.654057   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:06.654064   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:06.654124   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:06.695109   57945 cri.go:89] found id: ""
	I0816 13:47:06.695139   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.695147   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:06.695153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:06.695205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:06.731545   57945 cri.go:89] found id: ""
	I0816 13:47:06.731620   57945 logs.go:276] 0 containers: []
	W0816 13:47:06.731635   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:06.731647   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:06.731668   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:06.782862   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:06.782900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:06.797524   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:06.797550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:06.877445   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:06.877476   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:06.877493   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:06.957932   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:06.957965   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:09.498843   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:09.513398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:09.513468   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:09.551246   57945 cri.go:89] found id: ""
	I0816 13:47:09.551275   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.551284   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:09.551290   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:09.551339   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:09.585033   57945 cri.go:89] found id: ""
	I0816 13:47:09.585059   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.585066   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:09.585072   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:09.585120   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:09.623498   57945 cri.go:89] found id: ""
	I0816 13:47:09.623524   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.623531   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:09.623537   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:09.623584   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:09.657476   57945 cri.go:89] found id: ""
	I0816 13:47:09.657504   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.657515   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:09.657523   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:09.657578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:09.693715   57945 cri.go:89] found id: ""
	I0816 13:47:09.693746   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.693757   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:09.693765   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:09.693825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:09.727396   57945 cri.go:89] found id: ""
	I0816 13:47:09.727426   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.727437   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:09.727451   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:09.727511   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:09.764334   57945 cri.go:89] found id: ""
	I0816 13:47:09.764361   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.764368   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:09.764374   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:09.764428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:09.799460   57945 cri.go:89] found id: ""
	I0816 13:47:09.799485   57945 logs.go:276] 0 containers: []
	W0816 13:47:09.799497   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:09.799508   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:09.799521   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:09.849637   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:09.849678   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:09.869665   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:09.869702   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:09.954878   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:09.954907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:09.954922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:10.032473   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:10.032507   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:07.107809   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.606867   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:09.358384   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.359451   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:11.997273   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.998709   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:12.574303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:12.587684   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:12.587746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:12.625568   57945 cri.go:89] found id: ""
	I0816 13:47:12.625593   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.625604   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:12.625611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:12.625719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:12.665018   57945 cri.go:89] found id: ""
	I0816 13:47:12.665048   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.665059   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:12.665067   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:12.665128   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:12.701125   57945 cri.go:89] found id: ""
	I0816 13:47:12.701150   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.701158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:12.701163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:12.701218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:12.740613   57945 cri.go:89] found id: ""
	I0816 13:47:12.740644   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.740654   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:12.740662   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:12.740727   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:12.779620   57945 cri.go:89] found id: ""
	I0816 13:47:12.779652   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.779664   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:12.779678   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:12.779743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:12.816222   57945 cri.go:89] found id: ""
	I0816 13:47:12.816248   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.816269   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:12.816278   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:12.816327   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:12.853083   57945 cri.go:89] found id: ""
	I0816 13:47:12.853113   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.853125   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:12.853133   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:12.853192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:12.888197   57945 cri.go:89] found id: ""
	I0816 13:47:12.888223   57945 logs.go:276] 0 containers: []
	W0816 13:47:12.888232   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:12.888240   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:12.888255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:12.941464   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:12.941502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:12.955423   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:12.955456   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:13.025515   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:13.025537   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:13.025550   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:13.112409   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:13.112452   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:12.107421   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:14.606538   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:13.857389   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.857870   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:16.498127   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.498877   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:15.656240   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:15.669505   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:15.669568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:15.703260   57945 cri.go:89] found id: ""
	I0816 13:47:15.703288   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.703299   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:15.703306   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:15.703368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:15.740555   57945 cri.go:89] found id: ""
	I0816 13:47:15.740580   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.740590   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:15.740596   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:15.740660   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:15.776207   57945 cri.go:89] found id: ""
	I0816 13:47:15.776233   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.776241   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:15.776247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:15.776302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:15.816845   57945 cri.go:89] found id: ""
	I0816 13:47:15.816871   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.816879   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:15.816884   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:15.816953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:15.851279   57945 cri.go:89] found id: ""
	I0816 13:47:15.851306   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.851318   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:15.851325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:15.851391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:15.884960   57945 cri.go:89] found id: ""
	I0816 13:47:15.884987   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.884997   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:15.885004   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:15.885063   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:15.922027   57945 cri.go:89] found id: ""
	I0816 13:47:15.922051   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.922060   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:15.922067   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:15.922130   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:15.956774   57945 cri.go:89] found id: ""
	I0816 13:47:15.956799   57945 logs.go:276] 0 containers: []
	W0816 13:47:15.956806   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:15.956814   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:15.956828   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:16.036342   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:16.036375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:16.079006   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:16.079033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:16.130374   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:16.130409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:16.144707   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:16.144740   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:16.216466   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
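	[editor's note] Every "describe nodes" attempt above fails the same way: nothing answers on localhost:8443, which is consistent with the empty `crictl` listings showing that no kube-apiserver container exists yet. A minimal sketch of the same reachability check, outside the test harness, is below; the address and timeout are assumptions taken from the error text, not from minikube's code.

```go
// probe_apiserver.go - illustrative sketch only (not part of the minikube test harness):
// check whether anything is listening on the apiserver endpoint that the
// "describe nodes" calls above keep failing against.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:8443" // assumed endpoint, taken from the "connection refused" message above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// This branch corresponds to the repeated "connection to the server localhost:8443 was refused" lines.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}
```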
	I0816 13:47:18.716696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:18.729670   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:18.729731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:18.764481   57945 cri.go:89] found id: ""
	I0816 13:47:18.764513   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.764521   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:18.764527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:18.764574   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:18.803141   57945 cri.go:89] found id: ""
	I0816 13:47:18.803172   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.803183   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:18.803192   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:18.803257   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:18.847951   57945 cri.go:89] found id: ""
	I0816 13:47:18.847977   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.847985   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:18.847991   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:18.848038   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:18.881370   57945 cri.go:89] found id: ""
	I0816 13:47:18.881402   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.881420   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:18.881434   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:18.881491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:18.916206   57945 cri.go:89] found id: ""
	I0816 13:47:18.916237   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.916247   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:18.916253   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:18.916314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:18.946851   57945 cri.go:89] found id: ""
	I0816 13:47:18.946873   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.946883   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:18.946891   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:18.946944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:18.980684   57945 cri.go:89] found id: ""
	I0816 13:47:18.980710   57945 logs.go:276] 0 containers: []
	W0816 13:47:18.980718   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:18.980724   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:18.980789   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:19.015762   57945 cri.go:89] found id: ""
	I0816 13:47:19.015794   57945 logs.go:276] 0 containers: []
	W0816 13:47:19.015805   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:19.015817   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:19.015837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:19.101544   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:19.101582   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:19.143587   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:19.143621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:19.198788   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:19.198826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:19.212697   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:19.212723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:19.282719   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:16.607841   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:19.107952   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:18.358184   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.857525   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:20.499116   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.996642   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.998888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
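	[editor's note] The interleaved `pod_ready.go:103` lines come from the other test clusters (PIDs 58430, 57440, 57240) polling their metrics-server pods for the Ready condition. A hedged client-go sketch of that kind of readiness check follows; the kubeconfig path, namespace and pod name are placeholders for illustration, not values from this run, and this is not the harness's actual implementation.

```go
// podready_check.go - sketch of a pod Ready-condition check with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Placeholder pod name; the log polls pods like "metrics-server-6867b74b74-...".
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready) // "Ready":"False" in the log means this stays false
}
```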
	I0816 13:47:21.783729   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:21.797977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:21.798056   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:21.833944   57945 cri.go:89] found id: ""
	I0816 13:47:21.833976   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.833987   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:21.833996   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:21.834053   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:21.870079   57945 cri.go:89] found id: ""
	I0816 13:47:21.870110   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.870120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:21.870128   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:21.870191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:21.905834   57945 cri.go:89] found id: ""
	I0816 13:47:21.905864   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.905872   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:21.905878   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:21.905932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:21.943319   57945 cri.go:89] found id: ""
	I0816 13:47:21.943341   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.943349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:21.943354   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:21.943412   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.982065   57945 cri.go:89] found id: ""
	I0816 13:47:21.982094   57945 logs.go:276] 0 containers: []
	W0816 13:47:21.982103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:21.982110   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:21.982268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:22.035131   57945 cri.go:89] found id: ""
	I0816 13:47:22.035167   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.035179   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:22.035186   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:22.035250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:22.082619   57945 cri.go:89] found id: ""
	I0816 13:47:22.082647   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.082655   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:22.082661   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:22.082720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:22.128521   57945 cri.go:89] found id: ""
	I0816 13:47:22.128550   57945 logs.go:276] 0 containers: []
	W0816 13:47:22.128559   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:22.128568   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:22.128581   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:22.182794   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:22.182824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:22.196602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:22.196628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:22.264434   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:22.264457   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:22.264472   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:22.343796   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:22.343832   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
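	[editor's note] Each cycle above runs `sudo crictl ps -a --quiet --name=<component>` for every control-plane component and gets empty output, which the harness reports as `found id: ""` and "0 containers". A small sketch that reproduces that probe locally is below; it assumes crictl is installed and sudo is available, and it is only an illustration of the check, not the harness code.

```go
// crictl_probe.go - sketch: list control-plane containers the same way the log does.
// Empty output for a component corresponds to the `found id: ""` / "0 containers" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per line when containers exist
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}
```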
	I0816 13:47:24.891164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:24.904170   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:24.904244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:24.941046   57945 cri.go:89] found id: ""
	I0816 13:47:24.941082   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.941093   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:24.941101   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:24.941177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:24.976520   57945 cri.go:89] found id: ""
	I0816 13:47:24.976553   57945 logs.go:276] 0 containers: []
	W0816 13:47:24.976564   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:24.976572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:24.976635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:25.024663   57945 cri.go:89] found id: ""
	I0816 13:47:25.024692   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.024704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:25.024712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:25.024767   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:25.063892   57945 cri.go:89] found id: ""
	I0816 13:47:25.063920   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.063928   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:25.063934   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:25.064014   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:21.607247   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:23.608388   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:22.857995   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:24.858506   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.497595   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.997611   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:25.105565   57945 cri.go:89] found id: ""
	I0816 13:47:25.105600   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.105612   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:25.105619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:25.105676   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:25.150965   57945 cri.go:89] found id: ""
	I0816 13:47:25.150995   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.151006   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:25.151014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:25.151074   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:25.191170   57945 cri.go:89] found id: ""
	I0816 13:47:25.191202   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.191213   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:25.191220   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:25.191280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:25.226614   57945 cri.go:89] found id: ""
	I0816 13:47:25.226643   57945 logs.go:276] 0 containers: []
	W0816 13:47:25.226653   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:25.226664   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:25.226680   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:25.239478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:25.239516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:25.315450   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:25.315478   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:25.315494   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:25.394755   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:25.394792   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:25.434737   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:25.434768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:27.984829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:28.000304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:28.000378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:28.042396   57945 cri.go:89] found id: ""
	I0816 13:47:28.042430   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.042447   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:28.042455   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:28.042514   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:28.094491   57945 cri.go:89] found id: ""
	I0816 13:47:28.094515   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.094523   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:28.094528   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:28.094586   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:28.146228   57945 cri.go:89] found id: ""
	I0816 13:47:28.146254   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.146262   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:28.146267   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:28.146314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:28.179302   57945 cri.go:89] found id: ""
	I0816 13:47:28.179335   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.179347   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:28.179355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:28.179417   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:28.216707   57945 cri.go:89] found id: ""
	I0816 13:47:28.216737   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.216749   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:28.216757   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:28.216808   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:28.253800   57945 cri.go:89] found id: ""
	I0816 13:47:28.253832   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.253843   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:28.253851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:28.253906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:28.289403   57945 cri.go:89] found id: ""
	I0816 13:47:28.289438   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.289450   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:28.289458   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:28.289520   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:28.325174   57945 cri.go:89] found id: ""
	I0816 13:47:28.325206   57945 logs.go:276] 0 containers: []
	W0816 13:47:28.325214   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:28.325222   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:28.325233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:28.377043   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:28.377077   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:28.390991   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:28.391028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:28.463563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:28.463584   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:28.463598   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:28.546593   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:28.546628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
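	[editor's note] When no containers are found, the harness falls back to gathering diagnostics: kubelet and CRI-O journals, filtered dmesg, and raw container status. The commands are exactly the ones quoted in the log lines above; the sketch below just runs them in sequence and is illustration only (it assumes a systemd host with sudo and crictl, and is not minikube's collector).

```go
// gather_logs.go - sketch of the "Gathering logs for ..." step, using the
// commands copied verbatim from the log lines above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s: %d bytes collected (err=%v) ==\n", name, len(out), err)
	}
}
```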
	I0816 13:47:26.107830   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:28.607294   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:30.613619   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:27.356723   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:29.358026   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.857750   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:32.497685   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.500214   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:31.084932   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:31.100742   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:31.100809   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:31.134888   57945 cri.go:89] found id: ""
	I0816 13:47:31.134914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.134921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:31.134929   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:31.134979   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:31.169533   57945 cri.go:89] found id: ""
	I0816 13:47:31.169558   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.169566   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:31.169572   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:31.169630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:31.203888   57945 cri.go:89] found id: ""
	I0816 13:47:31.203914   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.203924   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:31.203931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:31.203993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:31.239346   57945 cri.go:89] found id: ""
	I0816 13:47:31.239374   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.239387   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:31.239393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:31.239443   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:31.274011   57945 cri.go:89] found id: ""
	I0816 13:47:31.274038   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.274046   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:31.274053   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:31.274117   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:31.308812   57945 cri.go:89] found id: ""
	I0816 13:47:31.308845   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.308856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:31.308863   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:31.308950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:31.343041   57945 cri.go:89] found id: ""
	I0816 13:47:31.343067   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.343075   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:31.343082   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:31.343143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:31.380969   57945 cri.go:89] found id: ""
	I0816 13:47:31.380998   57945 logs.go:276] 0 containers: []
	W0816 13:47:31.381006   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:31.381015   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:31.381028   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:31.434431   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:31.434465   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:31.449374   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:31.449404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:31.522134   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:31.522159   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:31.522174   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:31.602707   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:31.602736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.142413   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:34.155531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:34.155595   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:34.195926   57945 cri.go:89] found id: ""
	I0816 13:47:34.195953   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.195964   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:34.195972   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:34.196040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:34.230064   57945 cri.go:89] found id: ""
	I0816 13:47:34.230092   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.230103   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:34.230109   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:34.230163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:34.263973   57945 cri.go:89] found id: ""
	I0816 13:47:34.263998   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.264005   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:34.264012   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:34.264069   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:34.298478   57945 cri.go:89] found id: ""
	I0816 13:47:34.298523   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.298532   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:34.298539   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:34.298597   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:34.337196   57945 cri.go:89] found id: ""
	I0816 13:47:34.337225   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.337233   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:34.337239   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:34.337291   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:34.374716   57945 cri.go:89] found id: ""
	I0816 13:47:34.374751   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.374763   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:34.374771   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:34.374830   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:34.413453   57945 cri.go:89] found id: ""
	I0816 13:47:34.413480   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.413491   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:34.413498   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:34.413563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:34.450074   57945 cri.go:89] found id: ""
	I0816 13:47:34.450107   57945 logs.go:276] 0 containers: []
	W0816 13:47:34.450119   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:34.450156   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:34.450176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:34.490214   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:34.490239   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:34.542861   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:34.542895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:34.557371   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:34.557400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:34.627976   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:34.627995   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:34.628011   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:33.106665   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:35.107026   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:34.358059   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.858347   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:36.998289   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.499047   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:37.205741   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:37.219207   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:37.219286   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:37.258254   57945 cri.go:89] found id: ""
	I0816 13:47:37.258288   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.258300   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:37.258307   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:37.258359   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:37.293604   57945 cri.go:89] found id: ""
	I0816 13:47:37.293635   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.293647   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:37.293654   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:37.293715   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:37.334043   57945 cri.go:89] found id: ""
	I0816 13:47:37.334072   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.334084   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:37.334091   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:37.334153   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:37.369745   57945 cri.go:89] found id: ""
	I0816 13:47:37.369770   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.369777   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:37.369784   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:37.369835   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:37.406277   57945 cri.go:89] found id: ""
	I0816 13:47:37.406305   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.406317   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:37.406325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:37.406407   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:37.440418   57945 cri.go:89] found id: ""
	I0816 13:47:37.440449   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.440456   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:37.440463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:37.440515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:37.474527   57945 cri.go:89] found id: ""
	I0816 13:47:37.474561   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.474572   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:37.474580   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:37.474642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:37.513959   57945 cri.go:89] found id: ""
	I0816 13:47:37.513987   57945 logs.go:276] 0 containers: []
	W0816 13:47:37.513995   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:37.514004   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:37.514020   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:37.569561   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:37.569597   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:37.584095   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:37.584127   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:37.652289   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:37.652317   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:37.652333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:37.737388   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:37.737434   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:37.107091   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.108555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:39.358316   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.858946   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:41.998041   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.498467   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:40.281872   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:40.295704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:40.295763   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:40.336641   57945 cri.go:89] found id: ""
	I0816 13:47:40.336667   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.336678   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:40.336686   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:40.336748   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:40.373500   57945 cri.go:89] found id: ""
	I0816 13:47:40.373524   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.373531   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:40.373536   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:40.373593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:40.417553   57945 cri.go:89] found id: ""
	I0816 13:47:40.417575   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.417583   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:40.417589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:40.417645   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:40.452778   57945 cri.go:89] found id: ""
	I0816 13:47:40.452809   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.452819   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:40.452827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:40.452896   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:40.491389   57945 cri.go:89] found id: ""
	I0816 13:47:40.491424   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.491436   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:40.491445   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:40.491505   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:40.529780   57945 cri.go:89] found id: ""
	I0816 13:47:40.529815   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.529826   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:40.529835   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:40.529903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:40.567724   57945 cri.go:89] found id: ""
	I0816 13:47:40.567751   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.567761   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:40.567768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:40.567825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:40.604260   57945 cri.go:89] found id: ""
	I0816 13:47:40.604299   57945 logs.go:276] 0 containers: []
	W0816 13:47:40.604309   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:40.604319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:40.604335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:40.676611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:40.676642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:40.676659   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:40.755779   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:40.755815   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:40.793780   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:40.793811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:40.845869   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:40.845902   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.361766   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:43.376247   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:43.376309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:43.416527   57945 cri.go:89] found id: ""
	I0816 13:47:43.416559   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.416567   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:43.416573   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:43.416621   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:43.458203   57945 cri.go:89] found id: ""
	I0816 13:47:43.458228   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.458239   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:43.458246   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:43.458312   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:43.498122   57945 cri.go:89] found id: ""
	I0816 13:47:43.498146   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.498158   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:43.498166   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:43.498231   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:43.533392   57945 cri.go:89] found id: ""
	I0816 13:47:43.533418   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.533428   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:43.533436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:43.533510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:43.569258   57945 cri.go:89] found id: ""
	I0816 13:47:43.569294   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.569301   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:43.569309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:43.569368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:43.603599   57945 cri.go:89] found id: ""
	I0816 13:47:43.603624   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.603633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:43.603639   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:43.603696   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:43.643204   57945 cri.go:89] found id: ""
	I0816 13:47:43.643236   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.643248   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:43.643256   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:43.643343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:43.678365   57945 cri.go:89] found id: ""
	I0816 13:47:43.678393   57945 logs.go:276] 0 containers: []
	W0816 13:47:43.678412   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:43.678424   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:43.678440   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:43.729472   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:43.729522   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:43.743714   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:43.743749   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:43.819210   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:43.819237   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:43.819252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:43.899800   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:43.899835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:41.606734   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:43.608097   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:44.357080   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.357589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.503576   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.998084   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:46.437795   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:46.450756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:46.450828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:46.487036   57945 cri.go:89] found id: ""
	I0816 13:47:46.487059   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.487067   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:46.487073   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:46.487119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:46.524268   57945 cri.go:89] found id: ""
	I0816 13:47:46.524294   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.524303   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:46.524308   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:46.524360   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:46.561202   57945 cri.go:89] found id: ""
	I0816 13:47:46.561232   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.561244   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:46.561251   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:46.561311   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:46.596006   57945 cri.go:89] found id: ""
	I0816 13:47:46.596032   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.596039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:46.596045   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:46.596094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:46.632279   57945 cri.go:89] found id: ""
	I0816 13:47:46.632306   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.632313   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:46.632319   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:46.632372   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:46.669139   57945 cri.go:89] found id: ""
	I0816 13:47:46.669166   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.669174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:46.669179   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:46.669237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:46.704084   57945 cri.go:89] found id: ""
	I0816 13:47:46.704115   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.704126   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:46.704134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:46.704207   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:46.740275   57945 cri.go:89] found id: ""
	I0816 13:47:46.740303   57945 logs.go:276] 0 containers: []
	W0816 13:47:46.740314   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:46.740325   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:46.740341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:46.792777   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:46.792811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:46.807390   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:46.807429   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:46.877563   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:46.877589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:46.877605   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:46.954703   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:46.954737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
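The cycle above shows the log collector probing each control-plane component with `sudo crictl ps -a --quiet --name=<component>` and getting an empty ID list back, which is what produces the repeated `No container was found matching "<component>"` warnings. A minimal, hypothetical Go sketch of that probing loop is shown below (illustrative only, not minikube's actual cri.go code; it assumes `sudo` and `crictl` are available on the host):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe seen in the log: it asks crictl for the
// IDs of all containers whose name matches the given component.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			// An empty result is what the log reports as
			// `No container was found matching "<component>"`.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```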
	I0816 13:47:49.497506   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:49.510913   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:49.511007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:49.547461   57945 cri.go:89] found id: ""
	I0816 13:47:49.547491   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.547503   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:49.547517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:49.547579   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:49.581972   57945 cri.go:89] found id: ""
	I0816 13:47:49.582005   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.582014   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:49.582021   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:49.582084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:49.617148   57945 cri.go:89] found id: ""
	I0816 13:47:49.617176   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.617185   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:49.617193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:49.617260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:49.652546   57945 cri.go:89] found id: ""
	I0816 13:47:49.652569   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.652578   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:49.652584   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:49.652631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:49.688040   57945 cri.go:89] found id: ""
	I0816 13:47:49.688071   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.688079   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:49.688084   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:49.688154   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:49.721779   57945 cri.go:89] found id: ""
	I0816 13:47:49.721809   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.721819   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:49.721827   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:49.721890   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:49.758926   57945 cri.go:89] found id: ""
	I0816 13:47:49.758953   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.758960   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:49.758966   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:49.759020   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:49.796328   57945 cri.go:89] found id: ""
	I0816 13:47:49.796358   57945 logs.go:276] 0 containers: []
	W0816 13:47:49.796368   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:49.796378   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:49.796393   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:49.851818   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:49.851855   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:49.867320   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:49.867350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:49.934885   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
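Each `describe nodes` attempt fails because the in-VM kubectl cannot reach the API server at localhost:8443 ("connection refused"), which is consistent with the empty kube-apiserver container list in the probes above. A quick, hypothetical way to confirm the same condition from Go (not part of minikube, just an illustration of the check):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used by the in-VM kubectl points at localhost:8443.
	// With no kube-apiserver container running, this dial fails with
	// "connection refused", matching the stderr captured above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```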
	I0816 13:47:49.934907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:49.934921   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:50.018012   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:50.018055   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:46.105523   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.107122   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.606969   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:48.357769   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.859617   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:50.998256   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.498046   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:52.563101   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:52.576817   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:52.576879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:52.613425   57945 cri.go:89] found id: ""
	I0816 13:47:52.613459   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.613469   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:52.613475   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:52.613522   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:52.650086   57945 cri.go:89] found id: ""
	I0816 13:47:52.650109   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.650117   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:52.650123   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:52.650186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:52.686993   57945 cri.go:89] found id: ""
	I0816 13:47:52.687020   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.687028   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:52.687034   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:52.687080   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:52.724307   57945 cri.go:89] found id: ""
	I0816 13:47:52.724337   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.724349   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:52.724357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:52.724421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:52.759250   57945 cri.go:89] found id: ""
	I0816 13:47:52.759281   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.759290   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:52.759295   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:52.759350   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:52.798634   57945 cri.go:89] found id: ""
	I0816 13:47:52.798660   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.798670   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:52.798677   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:52.798741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:52.833923   57945 cri.go:89] found id: ""
	I0816 13:47:52.833946   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.833954   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:52.833960   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:52.834005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:52.873647   57945 cri.go:89] found id: ""
	I0816 13:47:52.873671   57945 logs.go:276] 0 containers: []
	W0816 13:47:52.873679   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:52.873687   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:52.873701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:52.887667   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:52.887697   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:52.960494   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:52.960516   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:52.960529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:53.037132   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:53.037167   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:53.076769   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:53.076799   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:52.607529   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.107256   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:53.357315   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.357380   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:55.498193   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.498238   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.997582   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
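The interleaved `pod_ready.go:103` lines come from the other concurrent test processes (PIDs 58430, 57440, 57240), each polling a metrics-server pod in kube-system and finding its Ready condition still False. A hedged client-go sketch of that kind of readiness poll follows (illustrative only; minikube's own pod_ready helper differs, and the kubeconfig path is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has its Ready condition set to True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly the way the log does: report "Ready":"False" until the
	// pod's Ready condition flips to True.
	for {
		ok, err := podReady(cs, "kube-system", "metrics-server-6867b74b74-9277d")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}
```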
	I0816 13:47:55.625565   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:55.639296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:55.639367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:55.675104   57945 cri.go:89] found id: ""
	I0816 13:47:55.675137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.675149   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:55.675156   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:55.675220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:55.710108   57945 cri.go:89] found id: ""
	I0816 13:47:55.710137   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.710149   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:55.710156   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:55.710218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:55.744190   57945 cri.go:89] found id: ""
	I0816 13:47:55.744212   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.744220   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:55.744225   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:55.744288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:55.781775   57945 cri.go:89] found id: ""
	I0816 13:47:55.781806   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.781815   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:55.781821   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:55.781879   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:55.818877   57945 cri.go:89] found id: ""
	I0816 13:47:55.818907   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.818915   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:55.818921   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:55.818973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:55.858751   57945 cri.go:89] found id: ""
	I0816 13:47:55.858773   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.858782   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:55.858790   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:55.858852   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:55.894745   57945 cri.go:89] found id: ""
	I0816 13:47:55.894776   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.894787   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:55.894796   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:55.894854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:55.928805   57945 cri.go:89] found id: ""
	I0816 13:47:55.928832   57945 logs.go:276] 0 containers: []
	W0816 13:47:55.928843   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:55.928853   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:55.928872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:55.982684   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:55.982717   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:55.997319   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:55.997354   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:56.063016   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:56.063043   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:56.063059   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:56.147138   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:56.147177   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:58.686160   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:47:58.699135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:47:58.699260   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:47:58.737566   57945 cri.go:89] found id: ""
	I0816 13:47:58.737597   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.737606   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:47:58.737613   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:47:58.737662   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:47:58.778119   57945 cri.go:89] found id: ""
	I0816 13:47:58.778149   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.778164   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:47:58.778173   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:47:58.778243   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:47:58.815003   57945 cri.go:89] found id: ""
	I0816 13:47:58.815031   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.815040   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:47:58.815046   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:47:58.815094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:47:58.847912   57945 cri.go:89] found id: ""
	I0816 13:47:58.847941   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.847949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:47:58.847955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:47:58.848005   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:47:58.882600   57945 cri.go:89] found id: ""
	I0816 13:47:58.882623   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.882631   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:47:58.882637   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:47:58.882686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:47:58.920459   57945 cri.go:89] found id: ""
	I0816 13:47:58.920489   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.920500   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:47:58.920507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:47:58.920571   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:47:58.952411   57945 cri.go:89] found id: ""
	I0816 13:47:58.952445   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.952453   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:47:58.952460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:47:58.952570   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:47:58.985546   57945 cri.go:89] found id: ""
	I0816 13:47:58.985573   57945 logs.go:276] 0 containers: []
	W0816 13:47:58.985581   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:47:58.985589   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:47:58.985600   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:47:59.067406   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:47:59.067439   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:47:59.108076   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:47:59.108107   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:47:59.162698   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:47:59.162734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:47:59.178734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:47:59.178759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:47:59.255267   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:47:57.606146   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.606603   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:57.358416   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:47:59.861332   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.998633   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.498646   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:01.756248   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:01.768940   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:01.769009   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:01.804884   57945 cri.go:89] found id: ""
	I0816 13:48:01.804924   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.804936   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:01.804946   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:01.805000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:01.844010   57945 cri.go:89] found id: ""
	I0816 13:48:01.844035   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.844042   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:01.844051   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:01.844104   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:01.882450   57945 cri.go:89] found id: ""
	I0816 13:48:01.882488   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.882500   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:01.882507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:01.882568   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:01.916995   57945 cri.go:89] found id: ""
	I0816 13:48:01.917028   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.917039   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:01.917048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:01.917109   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:01.956289   57945 cri.go:89] found id: ""
	I0816 13:48:01.956312   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.956319   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:01.956325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:01.956378   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:01.991823   57945 cri.go:89] found id: ""
	I0816 13:48:01.991862   57945 logs.go:276] 0 containers: []
	W0816 13:48:01.991875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:01.991882   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:01.991953   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:02.034244   57945 cri.go:89] found id: ""
	I0816 13:48:02.034272   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.034282   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:02.034290   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:02.034357   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:02.067902   57945 cri.go:89] found id: ""
	I0816 13:48:02.067930   57945 logs.go:276] 0 containers: []
	W0816 13:48:02.067942   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:02.067953   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:02.067971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:02.121170   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:02.121196   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:02.177468   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:02.177498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:02.191721   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:02.191757   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:02.270433   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:02.270463   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:02.270500   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:04.855768   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:04.869098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:04.869175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:04.907817   57945 cri.go:89] found id: ""
	I0816 13:48:04.907848   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.907856   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:04.907863   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:04.907919   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:04.943307   57945 cri.go:89] found id: ""
	I0816 13:48:04.943339   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.943349   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:04.943356   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:04.943416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:04.979884   57945 cri.go:89] found id: ""
	I0816 13:48:04.979914   57945 logs.go:276] 0 containers: []
	W0816 13:48:04.979922   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:04.979929   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:04.979978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:05.021400   57945 cri.go:89] found id: ""
	I0816 13:48:05.021442   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.021453   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:05.021463   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:05.021542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:05.057780   57945 cri.go:89] found id: ""
	I0816 13:48:05.057800   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.057808   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:05.057814   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:05.057864   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:05.091947   57945 cri.go:89] found id: ""
	I0816 13:48:05.091976   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.091987   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:05.091995   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:05.092058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:01.607315   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.107759   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:02.358142   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:04.857766   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:06.998437   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.496888   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:05.129740   57945 cri.go:89] found id: ""
	I0816 13:48:05.129771   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.129781   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:05.129788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:05.129857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:05.163020   57945 cri.go:89] found id: ""
	I0816 13:48:05.163049   57945 logs.go:276] 0 containers: []
	W0816 13:48:05.163060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:05.163070   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:05.163087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:05.236240   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:05.236266   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:05.236281   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:05.310559   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:05.310595   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:05.351614   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:05.351646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:05.404938   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:05.404971   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:07.921010   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:07.934181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:07.934255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:07.969474   57945 cri.go:89] found id: ""
	I0816 13:48:07.969502   57945 logs.go:276] 0 containers: []
	W0816 13:48:07.969512   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:07.969520   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:07.969575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:08.007423   57945 cri.go:89] found id: ""
	I0816 13:48:08.007447   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.007454   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:08.007460   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:08.007515   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:08.043981   57945 cri.go:89] found id: ""
	I0816 13:48:08.044010   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.044021   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:08.044027   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:08.044076   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:08.078631   57945 cri.go:89] found id: ""
	I0816 13:48:08.078656   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.078664   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:08.078669   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:08.078720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:08.114970   57945 cri.go:89] found id: ""
	I0816 13:48:08.114998   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.115010   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:08.115020   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:08.115081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:08.149901   57945 cri.go:89] found id: ""
	I0816 13:48:08.149936   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.149944   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:08.149951   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:08.150007   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:08.183104   57945 cri.go:89] found id: ""
	I0816 13:48:08.183128   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.183136   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:08.183141   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:08.183189   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:08.216972   57945 cri.go:89] found id: ""
	I0816 13:48:08.217005   57945 logs.go:276] 0 containers: []
	W0816 13:48:08.217016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:08.217027   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:08.217043   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:08.231192   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:08.231223   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:08.306779   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:08.306807   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:08.306823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:08.388235   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:08.388274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:08.429040   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:08.429071   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
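Between container probes, the collector gathers host-level logs: `journalctl -u kubelet -n 400`, a filtered `dmesg`, `journalctl -u crio -n 400`, and a `crictl ps -a` snapshot. A small, hypothetical Go sketch that runs the same commands locally is shown below (minikube actually runs them over SSH inside the VM via ssh_runner.go; this just reproduces the shell commands from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same gathering commands that appear in the log, run through bash
	// so the pipe and command substitution work as written.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
```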
	I0816 13:48:06.110473   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:08.606467   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:07.356589   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:09.357419   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.357839   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:11.497754   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.997641   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:10.983867   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:10.997649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:10.997722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:11.033315   57945 cri.go:89] found id: ""
	I0816 13:48:11.033351   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.033362   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:11.033370   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:11.033437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:11.069000   57945 cri.go:89] found id: ""
	I0816 13:48:11.069030   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.069038   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:11.069044   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:11.069102   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:11.100668   57945 cri.go:89] found id: ""
	I0816 13:48:11.100691   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.100698   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:11.100704   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:11.100755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:11.134753   57945 cri.go:89] found id: ""
	I0816 13:48:11.134782   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.134792   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:11.134800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:11.134857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:11.169691   57945 cri.go:89] found id: ""
	I0816 13:48:11.169717   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.169726   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:11.169734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:11.169797   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:11.204048   57945 cri.go:89] found id: ""
	I0816 13:48:11.204077   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.204088   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:11.204095   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:11.204147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:11.237659   57945 cri.go:89] found id: ""
	I0816 13:48:11.237687   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.237698   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:11.237706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:11.237768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:11.271886   57945 cri.go:89] found id: ""
	I0816 13:48:11.271911   57945 logs.go:276] 0 containers: []
	W0816 13:48:11.271922   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:11.271932   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:11.271946   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:11.327237   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:11.327274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:11.343215   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:11.343256   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:11.419725   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:11.419752   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:11.419768   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:11.498221   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:11.498252   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:14.044619   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:14.057479   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:14.057537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:14.093405   57945 cri.go:89] found id: ""
	I0816 13:48:14.093439   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.093450   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:14.093459   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:14.093516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:14.127089   57945 cri.go:89] found id: ""
	I0816 13:48:14.127111   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.127120   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:14.127127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:14.127190   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:14.165676   57945 cri.go:89] found id: ""
	I0816 13:48:14.165708   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.165719   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:14.165726   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:14.165791   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:14.198630   57945 cri.go:89] found id: ""
	I0816 13:48:14.198652   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.198660   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:14.198665   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:14.198717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:14.246679   57945 cri.go:89] found id: ""
	I0816 13:48:14.246706   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.246714   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:14.246719   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:14.246774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:14.290928   57945 cri.go:89] found id: ""
	I0816 13:48:14.290960   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.290972   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:14.290979   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:14.291043   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:14.342499   57945 cri.go:89] found id: ""
	I0816 13:48:14.342527   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.342537   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:14.342544   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:14.342613   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:14.377858   57945 cri.go:89] found id: ""
	I0816 13:48:14.377891   57945 logs.go:276] 0 containers: []
	W0816 13:48:14.377899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:14.377913   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:14.377928   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:14.431180   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:14.431218   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:14.445355   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:14.445381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:14.513970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:14.513991   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:14.514006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:14.591381   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:14.591416   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:11.108299   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.612816   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:13.856979   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.857269   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:15.999100   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.497473   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:17.133406   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:17.146647   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:17.146703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:17.180991   57945 cri.go:89] found id: ""
	I0816 13:48:17.181022   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.181032   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:17.181041   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:17.181103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:17.214862   57945 cri.go:89] found id: ""
	I0816 13:48:17.214892   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.214903   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:17.214910   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:17.214971   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:17.250316   57945 cri.go:89] found id: ""
	I0816 13:48:17.250344   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.250355   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:17.250362   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:17.250425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:17.282959   57945 cri.go:89] found id: ""
	I0816 13:48:17.282991   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.283001   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:17.283008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:17.283070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:17.316185   57945 cri.go:89] found id: ""
	I0816 13:48:17.316213   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.316224   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:17.316232   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:17.316292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:17.353383   57945 cri.go:89] found id: ""
	I0816 13:48:17.353410   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.353420   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:17.353428   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:17.353487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:17.390808   57945 cri.go:89] found id: ""
	I0816 13:48:17.390836   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.390844   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:17.390850   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:17.390898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:17.425484   57945 cri.go:89] found id: ""
	I0816 13:48:17.425517   57945 logs.go:276] 0 containers: []
	W0816 13:48:17.425529   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:17.425539   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:17.425556   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:17.439184   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:17.439220   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:17.511813   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:17.511838   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:17.511853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:17.597415   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:17.597447   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:17.636703   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:17.636738   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
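	(Editor's note, illustrative only.) The cycle above — one `sudo crictl ps -a --quiet --name=<component>` probe per control-plane component, each reporting "0 containers" — is how the log collector decides which component logs it can gather. A minimal sketch of that probe loop, assuming crictl is run locally rather than through minikube's ssh_runner, is:

```go
// Illustrative sketch (not minikube's actual logs.go): probe each expected
// control-plane component with `crictl ps -a --quiet --name=<component>`,
// mirroring the repeated "listing CRI containers" / "found id" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("W %s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the log's: No container was found matching "<name>"
			fmt.Printf("W No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("I %s: %d containers: %v\n", name, len(ids), ids)
	}
}
```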
	I0816 13:48:16.105992   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.606940   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.607532   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:18.357812   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.358351   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.498644   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.998103   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:24.999122   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:20.193694   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:20.207488   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:20.207549   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:20.246584   57945 cri.go:89] found id: ""
	I0816 13:48:20.246610   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.246618   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:20.246624   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:20.246678   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:20.282030   57945 cri.go:89] found id: ""
	I0816 13:48:20.282060   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.282071   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:20.282078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:20.282142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:20.317530   57945 cri.go:89] found id: ""
	I0816 13:48:20.317562   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.317571   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:20.317578   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:20.317630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:20.352964   57945 cri.go:89] found id: ""
	I0816 13:48:20.352990   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.353000   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:20.353008   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:20.353066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:20.388108   57945 cri.go:89] found id: ""
	I0816 13:48:20.388138   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.388148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:20.388156   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:20.388224   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:20.423627   57945 cri.go:89] found id: ""
	I0816 13:48:20.423660   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.423672   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:20.423680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:20.423741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:20.460975   57945 cri.go:89] found id: ""
	I0816 13:48:20.461003   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.461011   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:20.461017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:20.461081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:20.497707   57945 cri.go:89] found id: ""
	I0816 13:48:20.497728   57945 logs.go:276] 0 containers: []
	W0816 13:48:20.497735   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:20.497743   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:20.497758   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:20.584887   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:20.584939   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:20.627020   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:20.627054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:20.680716   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:20.680756   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:20.694945   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:20.694973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:20.770900   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.271654   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:23.284709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:23.284788   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:23.324342   57945 cri.go:89] found id: ""
	I0816 13:48:23.324374   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.324384   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:23.324393   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:23.324453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:23.358846   57945 cri.go:89] found id: ""
	I0816 13:48:23.358869   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.358879   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:23.358885   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:23.358943   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:23.392580   57945 cri.go:89] found id: ""
	I0816 13:48:23.392607   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.392618   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:23.392626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:23.392686   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:23.428035   57945 cri.go:89] found id: ""
	I0816 13:48:23.428066   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.428076   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:23.428083   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:23.428164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:23.470027   57945 cri.go:89] found id: ""
	I0816 13:48:23.470054   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.470066   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:23.470076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:23.470242   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:23.506497   57945 cri.go:89] found id: ""
	I0816 13:48:23.506522   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.506530   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:23.506536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:23.506588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:23.542571   57945 cri.go:89] found id: ""
	I0816 13:48:23.542600   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.542611   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:23.542619   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:23.542683   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:23.578552   57945 cri.go:89] found id: ""
	I0816 13:48:23.578584   57945 logs.go:276] 0 containers: []
	W0816 13:48:23.578592   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:23.578601   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:23.578612   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:23.633145   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:23.633181   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:23.648089   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:23.648129   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:23.724645   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:23.724663   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:23.724675   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:23.812979   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:23.813013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:23.107986   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.607110   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:22.858674   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:25.358411   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.497538   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.498345   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:26.353455   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:26.367433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:26.367504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:26.406717   57945 cri.go:89] found id: ""
	I0816 13:48:26.406746   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.406756   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:26.406764   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:26.406825   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:26.440267   57945 cri.go:89] found id: ""
	I0816 13:48:26.440298   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.440309   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:26.440317   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:26.440379   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:26.479627   57945 cri.go:89] found id: ""
	I0816 13:48:26.479653   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.479662   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:26.479667   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:26.479714   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:26.516608   57945 cri.go:89] found id: ""
	I0816 13:48:26.516638   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.516646   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:26.516653   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:26.516713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:26.553474   57945 cri.go:89] found id: ""
	I0816 13:48:26.553496   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.553505   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:26.553510   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:26.553566   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:26.586090   57945 cri.go:89] found id: ""
	I0816 13:48:26.586147   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.586160   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:26.586167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:26.586217   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:26.621874   57945 cri.go:89] found id: ""
	I0816 13:48:26.621903   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.621914   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:26.621923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:26.621999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:26.656643   57945 cri.go:89] found id: ""
	I0816 13:48:26.656668   57945 logs.go:276] 0 containers: []
	W0816 13:48:26.656676   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:26.656684   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:26.656694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:26.710589   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:26.710628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:26.724403   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:26.724423   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:26.795530   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:26.795550   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:26.795568   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:26.879670   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:26.879709   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.420540   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:29.434301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:29.434368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:29.471409   57945 cri.go:89] found id: ""
	I0816 13:48:29.471438   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.471455   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:29.471464   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:29.471527   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:29.510841   57945 cri.go:89] found id: ""
	I0816 13:48:29.510865   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.510873   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:29.510880   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:29.510928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:29.546300   57945 cri.go:89] found id: ""
	I0816 13:48:29.546331   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.546342   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:29.546349   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:29.546409   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:29.579324   57945 cri.go:89] found id: ""
	I0816 13:48:29.579349   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.579357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:29.579363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:29.579414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:29.613729   57945 cri.go:89] found id: ""
	I0816 13:48:29.613755   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.613765   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:29.613772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:29.613831   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:29.649401   57945 cri.go:89] found id: ""
	I0816 13:48:29.649428   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.649439   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:29.649447   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:29.649510   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:29.685391   57945 cri.go:89] found id: ""
	I0816 13:48:29.685420   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.685428   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:29.685436   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:29.685504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:29.720954   57945 cri.go:89] found id: ""
	I0816 13:48:29.720981   57945 logs.go:276] 0 containers: []
	W0816 13:48:29.720993   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:29.721004   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:29.721019   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:29.791602   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:29.791625   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:29.791637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:29.876595   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:29.876633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:29.917172   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:29.917203   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:29.969511   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:29.969548   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:27.607276   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:30.106660   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:27.856585   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:29.857836   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:31.498615   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:33.999039   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.484186   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:32.499320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:32.499386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:32.537301   57945 cri.go:89] found id: ""
	I0816 13:48:32.537351   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.537365   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:48:32.537373   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:32.537441   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:32.574324   57945 cri.go:89] found id: ""
	I0816 13:48:32.574350   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.574360   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:48:32.574367   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:32.574445   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:32.610672   57945 cri.go:89] found id: ""
	I0816 13:48:32.610697   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.610704   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:48:32.610709   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:32.610760   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:32.649916   57945 cri.go:89] found id: ""
	I0816 13:48:32.649941   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.649949   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:48:32.649955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:32.650010   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:32.684204   57945 cri.go:89] found id: ""
	I0816 13:48:32.684234   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.684245   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:48:32.684257   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:32.684319   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:32.723735   57945 cri.go:89] found id: ""
	I0816 13:48:32.723764   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.723772   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:48:32.723778   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:32.723838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:32.759709   57945 cri.go:89] found id: ""
	I0816 13:48:32.759732   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.759740   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:32.759746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:48:32.759798   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:48:32.798782   57945 cri.go:89] found id: ""
	I0816 13:48:32.798807   57945 logs.go:276] 0 containers: []
	W0816 13:48:32.798815   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:48:32.798823   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:32.798835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:48:32.876166   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:48:32.876188   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:32.876199   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:32.956218   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:48:32.956253   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:32.996625   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:32.996662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:33.050093   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:33.050128   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:32.107363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.607045   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:32.357801   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:34.856980   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.857321   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:36.497064   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.498666   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:35.565097   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:35.578582   57945 kubeadm.go:597] duration metric: took 4m3.330349632s to restartPrimaryControlPlane
	W0816 13:48:35.578670   57945 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:48:35.578704   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:48:36.655625   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.076898816s)
	I0816 13:48:36.655703   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:36.670273   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:48:36.681600   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:48:36.691816   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:48:36.691835   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:48:36.691877   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:48:36.701841   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:48:36.701901   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:48:36.711571   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:48:36.720990   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:48:36.721055   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:48:36.730948   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.740294   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:48:36.740361   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:48:36.750725   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:48:36.761936   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:48:36.762009   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:48:36.772572   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:48:37.001184   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
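	(Editor's note, illustrative only.) The sequence just logged — `kubeadm reset --force`, a check of the /etc/kubernetes/*.conf files, removal of any that do not reference the expected control-plane endpoint, then `kubeadm init` with preflight errors ignored — is the recovery path taken once restarting the primary control plane has failed. A rough sketch of that sequence, with paths and flags copied from the log but the preflight list abbreviated, is:

```go
// Approximation of the recovery flow above; not minikube's own kubeadm.go.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	fmt.Println("Run:", cmd)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	binDir := "/var/lib/minikube/binaries/v1.20.0" // version taken from the log
	endpoint := "https://control-plane.minikube.internal:8443"

	// 1. Force-reset whatever is left of the previous control plane.
	_ = run(fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`, binDir))

	// 2. Remove kubeconfigs that do not reference the expected endpoint
	//    (grep exits non-zero when the file is missing or has no match).
	for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + conf
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			_ = run("sudo rm -f " + path)
		}
	}

	// 3. Re-initialise from the freshly written kubeadm.yaml, ignoring the
	//    preflight checks the log shows being skipped (list abbreviated here).
	_ = run(fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem`, binDir))
}
```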
	I0816 13:48:36.608364   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:39.106585   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:38.857386   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.358217   57440 pod_ready.go:103] pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:40.997776   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.998819   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:44.999474   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:41.106806   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:43.107007   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:45.606716   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:42.357715   57440 pod_ready.go:82] duration metric: took 4m0.006671881s for pod "metrics-server-6867b74b74-mgxhv" in "kube-system" namespace to be "Ready" ...
	E0816 13:48:42.357741   57440 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:48:42.357749   57440 pod_ready.go:39] duration metric: took 4m4.542302811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
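	(Editor's note, illustrative only.) The pod_ready lines above record a periodic Ready-condition check that gives up once its extra-wait budget expires, producing the "context deadline exceeded" error. A minimal sketch of such a deadline-bounded poll follows; shelling out to kubectl is an assumption for brevity, whereas minikube queries the API server directly.

```go
// Deadline-bounded readiness poll, approximating the pod_ready.go behaviour above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(ns, name string) bool {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func waitPodReady(ctx context.Context, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if podReady(ns, name) {
			return nil
		}
		select {
		case <-ctx.Done():
			// Mirrors: WaitExtra: waitPodCondition: context deadline exceeded
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, "kube-system", "metrics-server-6867b74b74-mgxhv"); err != nil {
		fmt.Println("E", err)
	}
}
```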
	I0816 13:48:42.357762   57440 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:48:42.357787   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:42.357834   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:42.415231   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:42.415255   57440 cri.go:89] found id: ""
	I0816 13:48:42.415265   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:42.415324   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.421713   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:42.421779   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:42.462840   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:42.462867   57440 cri.go:89] found id: ""
	I0816 13:48:42.462878   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:42.462940   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.467260   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:42.467321   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:42.505423   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:42.505449   57440 cri.go:89] found id: ""
	I0816 13:48:42.505458   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:42.505517   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.510072   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:42.510124   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:42.551873   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:42.551894   57440 cri.go:89] found id: ""
	I0816 13:48:42.551902   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:42.551949   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.556735   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:42.556783   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:42.595853   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:42.595884   57440 cri.go:89] found id: ""
	I0816 13:48:42.595895   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:42.595948   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.600951   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:42.601003   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:42.639288   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.639311   57440 cri.go:89] found id: ""
	I0816 13:48:42.639320   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:42.639367   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.644502   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:42.644554   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:42.685041   57440 cri.go:89] found id: ""
	I0816 13:48:42.685065   57440 logs.go:276] 0 containers: []
	W0816 13:48:42.685074   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:42.685079   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:42.685137   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:42.722485   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.722506   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:42.722510   57440 cri.go:89] found id: ""
	I0816 13:48:42.722519   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:42.722590   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.727136   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:42.731169   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:42.731189   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:42.794303   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:42.794334   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:42.833686   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:42.833715   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:42.874606   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:42.874632   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:42.948074   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:42.948111   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:42.963546   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:42.963571   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:43.027410   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:43.027446   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:43.067643   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:43.067670   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:43.115156   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:43.115183   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:43.246588   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:43.246618   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:43.291042   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:43.291069   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:43.330741   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:43.330771   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:43.371970   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:43.371999   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:46.357313   57440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:48:46.373368   57440 api_server.go:72] duration metric: took 4m16.32601859s to wait for apiserver process to appear ...
	I0816 13:48:46.373396   57440 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:48:46.373426   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:46.373473   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:46.411034   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:46.411059   57440 cri.go:89] found id: ""
	I0816 13:48:46.411067   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:46.411121   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.415948   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:46.416009   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:46.458648   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:46.458673   57440 cri.go:89] found id: ""
	I0816 13:48:46.458684   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:46.458735   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.463268   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:46.463332   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:46.502120   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:46.502139   57440 cri.go:89] found id: ""
	I0816 13:48:46.502149   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:46.502319   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.508632   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:46.508692   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:46.552732   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.552757   57440 cri.go:89] found id: ""
	I0816 13:48:46.552765   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:46.552812   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.557459   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:46.557524   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:46.598286   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:46.598308   57440 cri.go:89] found id: ""
	I0816 13:48:46.598330   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:46.598403   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.603050   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:46.603110   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:46.641616   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:46.641638   57440 cri.go:89] found id: ""
	I0816 13:48:46.641648   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:46.641712   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.646008   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:46.646076   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:46.682259   57440 cri.go:89] found id: ""
	I0816 13:48:46.682290   57440 logs.go:276] 0 containers: []
	W0816 13:48:46.682302   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:46.682310   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:46.682366   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:46.718955   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.718979   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:46.718985   57440 cri.go:89] found id: ""
	I0816 13:48:46.718993   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:46.719049   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.723519   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:46.727942   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:46.727968   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:46.771942   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:46.771971   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:46.818294   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:46.818319   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:46.887977   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:46.888021   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:46.903567   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:46.903599   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:47.010715   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:47.010747   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:47.056317   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:47.056346   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:47.114669   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:47.114696   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:47.498472   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.998541   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.606991   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:49.607458   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:47.157046   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:47.157073   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:47.199364   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:47.199393   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:47.640964   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:47.641003   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:47.683503   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:47.683541   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:47.746748   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:47.746798   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.296176   57440 api_server.go:253] Checking apiserver healthz at https://192.168.61.116:8443/healthz ...
	I0816 13:48:50.300482   57440 api_server.go:279] https://192.168.61.116:8443/healthz returned 200:
	ok
	I0816 13:48:50.301550   57440 api_server.go:141] control plane version: v1.31.0
	I0816 13:48:50.301570   57440 api_server.go:131] duration metric: took 3.928168044s to wait for apiserver health ...
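	(Editor's note, illustrative only.) Once the apiserver process reappears, the health wait above reduces to a GET against https://<apiserver>:8443/healthz, with a 200 "ok" body taken as healthy. A stdlib-only sketch of that probe follows; skipping TLS verification is an assumption for brevity, as the real check authenticates against the cluster CA.

```go
// Sketch of the healthz check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.116:8443/healthz") // address taken from the log
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.61.116:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}
```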
	I0816 13:48:50.301578   57440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:48:50.301599   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:48:50.301653   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:48:50.343199   57440 cri.go:89] found id: "17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.343223   57440 cri.go:89] found id: ""
	I0816 13:48:50.343231   57440 logs.go:276] 1 containers: [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1]
	I0816 13:48:50.343276   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.347576   57440 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:48:50.347651   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:48:50.387912   57440 cri.go:89] found id: "43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.387937   57440 cri.go:89] found id: ""
	I0816 13:48:50.387947   57440 logs.go:276] 1 containers: [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453]
	I0816 13:48:50.388004   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.392120   57440 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:48:50.392188   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:48:50.428655   57440 cri.go:89] found id: "1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:50.428680   57440 cri.go:89] found id: ""
	I0816 13:48:50.428688   57440 logs.go:276] 1 containers: [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc]
	I0816 13:48:50.428734   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.432863   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:48:50.432941   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:48:50.472269   57440 cri.go:89] found id: "db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:50.472295   57440 cri.go:89] found id: ""
	I0816 13:48:50.472304   57440 logs.go:276] 1 containers: [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4]
	I0816 13:48:50.472351   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.476961   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:48:50.477006   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:48:50.514772   57440 cri.go:89] found id: "ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:50.514793   57440 cri.go:89] found id: ""
	I0816 13:48:50.514801   57440 logs.go:276] 1 containers: [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5]
	I0816 13:48:50.514857   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.520430   57440 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:48:50.520492   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:48:50.564708   57440 cri.go:89] found id: "d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:50.564733   57440 cri.go:89] found id: ""
	I0816 13:48:50.564741   57440 logs.go:276] 1 containers: [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd]
	I0816 13:48:50.564788   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.569255   57440 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:48:50.569306   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:48:50.607803   57440 cri.go:89] found id: ""
	I0816 13:48:50.607823   57440 logs.go:276] 0 containers: []
	W0816 13:48:50.607829   57440 logs.go:278] No container was found matching "kindnet"
	I0816 13:48:50.607835   57440 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:48:50.607888   57440 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:48:50.643909   57440 cri.go:89] found id: "b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.643934   57440 cri.go:89] found id: "35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:50.643940   57440 cri.go:89] found id: ""
	I0816 13:48:50.643949   57440 logs.go:276] 2 containers: [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f]
	I0816 13:48:50.643994   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.648575   57440 ssh_runner.go:195] Run: which crictl
	I0816 13:48:50.653322   57440 logs.go:123] Gathering logs for dmesg ...
	I0816 13:48:50.653354   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:48:50.667847   57440 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:48:50.667878   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:48:50.774932   57440 logs.go:123] Gathering logs for kube-apiserver [17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1] ...
	I0816 13:48:50.774969   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b3d9ea47cdf479bb9a8b0166f4c6b54be2f71810863ea2991356c0cb511aa1"
	I0816 13:48:50.823473   57440 logs.go:123] Gathering logs for etcd [43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453] ...
	I0816 13:48:50.823503   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c9169b2abc20b16077d621a13b124656873a73d3844a89204569652c4f0453"
	I0816 13:48:50.884009   57440 logs.go:123] Gathering logs for storage-provisioner [b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae] ...
	I0816 13:48:50.884044   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9150d56b0778170ec4de2c99889de10ea2e06fb50c81233cebb07af832e1aae"
	I0816 13:48:50.925187   57440 logs.go:123] Gathering logs for container status ...
	I0816 13:48:50.925219   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:48:50.965019   57440 logs.go:123] Gathering logs for kubelet ...
	I0816 13:48:50.965046   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:48:51.033614   57440 logs.go:123] Gathering logs for coredns [1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc] ...
	I0816 13:48:51.033651   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c89ddcb90aa2a537b4187a6a0e066b08d7f0a8afde31dea8ac3af21844febcc"
	I0816 13:48:51.068360   57440 logs.go:123] Gathering logs for kube-scheduler [db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4] ...
	I0816 13:48:51.068387   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db946a59711675764668a52b15f1052ef91c798fd26a76142ea4f1ce05ba05d4"
	I0816 13:48:51.107768   57440 logs.go:123] Gathering logs for kube-proxy [ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5] ...
	I0816 13:48:51.107792   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca2c017b0b7fc552e18fe487e45381dab2cbffbdfdc8f0c5b91c393233cb88c5"
	I0816 13:48:51.163637   57440 logs.go:123] Gathering logs for kube-controller-manager [d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd] ...
	I0816 13:48:51.163673   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8cda792253cd656ab3d2ba0efa4471a5643f28b3af821d5d5f725ef3bdb0afd"
	I0816 13:48:51.227436   57440 logs.go:123] Gathering logs for storage-provisioner [35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f] ...
	I0816 13:48:51.227462   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35ef9517598da1e759d590014dd197fd2b31a27a1737ed5cb557c8b4b620a40f"
	I0816 13:48:51.265505   57440 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:48:51.265531   57440 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:48:54.130801   57440 system_pods.go:59] 8 kube-system pods found
	I0816 13:48:54.130828   57440 system_pods.go:61] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.130833   57440 system_pods.go:61] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.130837   57440 system_pods.go:61] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.130840   57440 system_pods.go:61] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.130843   57440 system_pods.go:61] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.130846   57440 system_pods.go:61] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.130852   57440 system_pods.go:61] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.130855   57440 system_pods.go:61] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.130862   57440 system_pods.go:74] duration metric: took 3.829279192s to wait for pod list to return data ...
	I0816 13:48:54.130868   57440 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:48:54.133253   57440 default_sa.go:45] found service account: "default"
	I0816 13:48:54.133282   57440 default_sa.go:55] duration metric: took 2.407297ms for default service account to be created ...
	I0816 13:48:54.133292   57440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:48:54.138812   57440 system_pods.go:86] 8 kube-system pods found
	I0816 13:48:54.138835   57440 system_pods.go:89] "coredns-6f6b679f8f-8kbs6" [e732183e-3b22-4a11-909a-246de5fc1c8a] Running
	I0816 13:48:54.138841   57440 system_pods.go:89] "etcd-no-preload-311070" [e05deb95-06ff-4a8d-b520-0350b2332814] Running
	I0816 13:48:54.138845   57440 system_pods.go:89] "kube-apiserver-no-preload-311070" [06cb4989-0511-4c14-a3ec-e966829cf2ec] Running
	I0816 13:48:54.138849   57440 system_pods.go:89] "kube-controller-manager-no-preload-311070" [baa9f379-4e14-4873-a456-42e90a816b0b] Running
	I0816 13:48:54.138853   57440 system_pods.go:89] "kube-proxy-b8d5b" [9ed1c33b-903f-43e8-880c-b9a49c658806] Running
	I0816 13:48:54.138856   57440 system_pods.go:89] "kube-scheduler-no-preload-311070" [166c0f64-ebf6-413f-82d5-f1c32991c63a] Running
	I0816 13:48:54.138863   57440 system_pods.go:89] "metrics-server-6867b74b74-mgxhv" [e9654a8e-4db2-494d-93a7-a134b0e2bb50] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:48:54.138868   57440 system_pods.go:89] "storage-provisioner" [f340d2e3-2889-4200-b477-830494b827c6] Running
	I0816 13:48:54.138874   57440 system_pods.go:126] duration metric: took 5.576801ms to wait for k8s-apps to be running ...
	I0816 13:48:54.138879   57440 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:48:54.138922   57440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:48:54.154406   57440 system_svc.go:56] duration metric: took 15.507123ms WaitForService to wait for kubelet
	I0816 13:48:54.154438   57440 kubeadm.go:582] duration metric: took 4m24.107091364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:48:54.154463   57440 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:48:54.156991   57440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:48:54.157012   57440 node_conditions.go:123] node cpu capacity is 2
	I0816 13:48:54.157027   57440 node_conditions.go:105] duration metric: took 2.558338ms to run NodePressure ...
	I0816 13:48:54.157041   57440 start.go:241] waiting for startup goroutines ...
	I0816 13:48:54.157052   57440 start.go:246] waiting for cluster config update ...
	I0816 13:48:54.157070   57440 start.go:255] writing updated cluster config ...
	I0816 13:48:54.157381   57440 ssh_runner.go:195] Run: rm -f paused
	I0816 13:48:54.205583   57440 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:48:54.207845   57440 out.go:177] * Done! kubectl is now configured to use "no-preload-311070" cluster and "default" namespace by default
	I0816 13:48:51.999301   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.498057   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:52.107465   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:54.606735   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.498967   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.997311   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:56.606925   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:48:58.606970   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.607943   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:00.997760   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:02.998653   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:03.107555   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.606363   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:05.497723   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.498572   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.997905   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:07.607916   58430 pod_ready.go:103] pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:09.606579   58430 pod_ready.go:82] duration metric: took 4m0.00617652s for pod "metrics-server-6867b74b74-j9tqh" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:09.606602   58430 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 13:49:09.606612   58430 pod_ready.go:39] duration metric: took 4m3.606005486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:09.606627   58430 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:49:09.606652   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:09.606698   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:09.660442   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:09.660461   58430 cri.go:89] found id: ""
	I0816 13:49:09.660469   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:09.660519   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.664752   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:09.664813   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:09.701589   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:09.701615   58430 cri.go:89] found id: ""
	I0816 13:49:09.701625   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:09.701681   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.706048   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:09.706114   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:09.743810   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:09.743832   58430 cri.go:89] found id: ""
	I0816 13:49:09.743841   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:09.743898   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.748197   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:09.748271   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:09.783730   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:09.783752   58430 cri.go:89] found id: ""
	I0816 13:49:09.783765   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:09.783828   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.787845   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:09.787909   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:09.828449   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:09.828472   58430 cri.go:89] found id: ""
	I0816 13:49:09.828481   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:09.828546   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.832890   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:09.832963   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:09.880136   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:09.880164   58430 cri.go:89] found id: ""
	I0816 13:49:09.880175   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:09.880232   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.884533   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:09.884599   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:09.924776   58430 cri.go:89] found id: ""
	I0816 13:49:09.924805   58430 logs.go:276] 0 containers: []
	W0816 13:49:09.924816   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:09.924828   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:09.924889   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:09.971663   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:09.971689   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:09.971695   58430 cri.go:89] found id: ""
	I0816 13:49:09.971705   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:09.971770   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.976297   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:09.980815   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:09.980844   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:10.020287   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:10.020317   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:10.060266   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:10.060291   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:10.113574   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:10.113608   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:10.153457   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:10.153482   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:10.191530   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:10.191559   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:10.206267   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:10.206296   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:10.326723   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:10.326753   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:10.377541   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:10.377574   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:10.895387   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:10.895445   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:10.947447   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:10.947475   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:11.997943   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:13.998932   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:11.020745   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:11.020786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:11.081224   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:11.081257   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.632726   58430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:49:13.651185   58430 api_server.go:72] duration metric: took 4m14.880109274s to wait for apiserver process to appear ...
	I0816 13:49:13.651214   58430 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:49:13.651254   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:13.651308   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:13.691473   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:13.691495   58430 cri.go:89] found id: ""
	I0816 13:49:13.691503   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:13.691582   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.695945   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:13.695998   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:13.730798   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:13.730830   58430 cri.go:89] found id: ""
	I0816 13:49:13.730840   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:13.730913   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.735156   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:13.735222   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:13.769612   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:13.769639   58430 cri.go:89] found id: ""
	I0816 13:49:13.769650   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:13.769710   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.773690   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:13.773745   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:13.815417   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:13.815444   58430 cri.go:89] found id: ""
	I0816 13:49:13.815454   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:13.815515   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.819596   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:13.819666   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:13.852562   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:13.852587   58430 cri.go:89] found id: ""
	I0816 13:49:13.852597   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:13.852657   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.856697   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:13.856757   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:13.902327   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:13.902346   58430 cri.go:89] found id: ""
	I0816 13:49:13.902353   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:13.902416   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.906789   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:13.906840   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:13.943401   58430 cri.go:89] found id: ""
	I0816 13:49:13.943430   58430 logs.go:276] 0 containers: []
	W0816 13:49:13.943438   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:13.943443   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:13.943490   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:13.979154   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:13.979178   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:13.979182   58430 cri.go:89] found id: ""
	I0816 13:49:13.979189   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:13.979235   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.983301   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:13.988522   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:13.988545   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:14.005891   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:14.005916   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:14.055686   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:14.055713   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:14.104975   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:14.105010   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:14.145761   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:14.145786   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:14.198935   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:14.198966   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:14.662287   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:14.662323   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:14.717227   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:14.717256   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:14.789824   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:14.789868   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:14.902892   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:14.902922   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:14.946711   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:14.946736   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:14.986143   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:14.986175   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:15.022107   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:15.022138   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:16.497493   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:18.497979   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:17.556820   58430 api_server.go:253] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 13:49:17.562249   58430 api_server.go:279] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 13:49:17.563264   58430 api_server.go:141] control plane version: v1.31.0
	I0816 13:49:17.563280   58430 api_server.go:131] duration metric: took 3.912060569s to wait for apiserver health ...
	I0816 13:49:17.563288   58430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:49:17.563312   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:49:17.563377   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:49:17.604072   58430 cri.go:89] found id: "4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:17.604099   58430 cri.go:89] found id: ""
	I0816 13:49:17.604109   58430 logs.go:276] 1 containers: [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190]
	I0816 13:49:17.604163   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.608623   58430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:49:17.608678   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:49:17.650241   58430 cri.go:89] found id: "83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.650267   58430 cri.go:89] found id: ""
	I0816 13:49:17.650275   58430 logs.go:276] 1 containers: [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176]
	I0816 13:49:17.650328   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.654928   58430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:49:17.655000   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:49:17.690057   58430 cri.go:89] found id: "8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:17.690085   58430 cri.go:89] found id: ""
	I0816 13:49:17.690095   58430 logs.go:276] 1 containers: [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910]
	I0816 13:49:17.690164   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.694636   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:49:17.694692   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:49:17.730134   58430 cri.go:89] found id: "ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:17.730167   58430 cri.go:89] found id: ""
	I0816 13:49:17.730177   58430 logs.go:276] 1 containers: [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9]
	I0816 13:49:17.730238   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.734364   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:49:17.734420   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:49:17.769579   58430 cri.go:89] found id: "99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:17.769595   58430 cri.go:89] found id: ""
	I0816 13:49:17.769603   58430 logs.go:276] 1 containers: [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb]
	I0816 13:49:17.769643   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.773543   58430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:49:17.773601   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:49:17.814287   58430 cri.go:89] found id: "590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:17.814310   58430 cri.go:89] found id: ""
	I0816 13:49:17.814319   58430 logs.go:276] 1 containers: [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239]
	I0816 13:49:17.814393   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.818904   58430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:49:17.818977   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:49:17.858587   58430 cri.go:89] found id: ""
	I0816 13:49:17.858614   58430 logs.go:276] 0 containers: []
	W0816 13:49:17.858622   58430 logs.go:278] No container was found matching "kindnet"
	I0816 13:49:17.858627   58430 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 13:49:17.858674   58430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 13:49:17.901759   58430 cri.go:89] found id: "7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:17.901784   58430 cri.go:89] found id: "17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:17.901788   58430 cri.go:89] found id: ""
	I0816 13:49:17.901796   58430 logs.go:276] 2 containers: [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825]
	I0816 13:49:17.901853   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.906139   58430 ssh_runner.go:195] Run: which crictl
	I0816 13:49:17.910273   58430 logs.go:123] Gathering logs for dmesg ...
	I0816 13:49:17.910293   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:49:17.924565   58430 logs.go:123] Gathering logs for etcd [83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176] ...
	I0816 13:49:17.924590   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83bd481c9871b46b4df2bd824fc42fc58090bdad47482b518854370a4ceb8176"
	I0816 13:49:17.971895   58430 logs.go:123] Gathering logs for coredns [8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910] ...
	I0816 13:49:17.971927   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8922cc9760a0ea4c13db4295a1c7686a3bb392faa01a121a44ddf127ac030910"
	I0816 13:49:18.011332   58430 logs.go:123] Gathering logs for kube-scheduler [ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9] ...
	I0816 13:49:18.011364   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5ec870d772b2cde61cab6f8f7bbd1732db838256578b9114309511373d1bb9"
	I0816 13:49:18.049264   58430 logs.go:123] Gathering logs for storage-provisioner [7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b] ...
	I0816 13:49:18.049292   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f296429e678f138980649d45caa244ed82e1daa6430ef645488eee5a4ba438b"
	I0816 13:49:18.084004   58430 logs.go:123] Gathering logs for container status ...
	I0816 13:49:18.084030   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:49:18.136961   58430 logs.go:123] Gathering logs for kubelet ...
	I0816 13:49:18.137000   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 13:49:18.210452   58430 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:49:18.210483   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 13:49:18.327398   58430 logs.go:123] Gathering logs for kube-apiserver [4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190] ...
	I0816 13:49:18.327429   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1bf38f05e6949d9635e8924be49105e64beea7b0bd26ffb747078b1ee91190"
	I0816 13:49:18.378777   58430 logs.go:123] Gathering logs for kube-proxy [99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb] ...
	I0816 13:49:18.378809   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99545c4e9a57a563af77c24070ef7b32e45ff0c46b472ae1d72f538a1be77cbb"
	I0816 13:49:18.430052   58430 logs.go:123] Gathering logs for kube-controller-manager [590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239] ...
	I0816 13:49:18.430088   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590cecb818b97a5af4447d5f0e22084e553369415a72a459274e40ed41f90239"
	I0816 13:49:18.496775   58430 logs.go:123] Gathering logs for storage-provisioner [17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825] ...
	I0816 13:49:18.496806   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17df9b5cc9f16bf689dfa4827257c4ed0629952869e0b83fe1fa484a03267825"
	I0816 13:49:18.540493   58430 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:49:18.540523   58430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:49:21.451644   58430 system_pods.go:59] 8 kube-system pods found
	I0816 13:49:21.451673   58430 system_pods.go:61] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.451679   58430 system_pods.go:61] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.451682   58430 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.451687   58430 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.451691   58430 system_pods.go:61] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.451694   58430 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.451701   58430 system_pods.go:61] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.451705   58430 system_pods.go:61] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.451713   58430 system_pods.go:74] duration metric: took 3.888418707s to wait for pod list to return data ...
	I0816 13:49:21.451719   58430 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:49:21.454558   58430 default_sa.go:45] found service account: "default"
	I0816 13:49:21.454578   58430 default_sa.go:55] duration metric: took 2.853068ms for default service account to be created ...
	I0816 13:49:21.454585   58430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:49:21.458906   58430 system_pods.go:86] 8 kube-system pods found
	I0816 13:49:21.458930   58430 system_pods.go:89] "coredns-6f6b679f8f-xdwhx" [66987c52-9a8c-4ddd-a6cf-ac84172d8c8c] Running
	I0816 13:49:21.458935   58430 system_pods.go:89] "etcd-default-k8s-diff-port-893736" [54f5bb47-00f5-4c65-88ae-460571872fd1] Running
	I0816 13:49:21.458941   58430 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893736" [c8c57619-3a4c-4cc9-b637-7842194071ad] Running
	I0816 13:49:21.458944   58430 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893736" [e553aced-34bd-4d30-b473-082001fe2948] Running
	I0816 13:49:21.458948   58430 system_pods.go:89] "kube-proxy-btq6r" [a2b7b283-da62-4cb8-a039-07a509491e5e] Running
	I0816 13:49:21.458951   58430 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893736" [0a30cbd0-a2ac-4ca9-bddc-674e6fc9ae77] Running
	I0816 13:49:21.458958   58430 system_pods.go:89] "metrics-server-6867b74b74-j9tqh" [ef077e6d-f368-4872-bb87-9e031d3ea764] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:49:21.458961   58430 system_pods.go:89] "storage-provisioner" [e2fbf16a-3bc7-4300-8023-5dbb20ba70bc] Running
	I0816 13:49:21.458968   58430 system_pods.go:126] duration metric: took 4.378971ms to wait for k8s-apps to be running ...
	I0816 13:49:21.458975   58430 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:49:21.459016   58430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:21.476060   58430 system_svc.go:56] duration metric: took 17.075817ms WaitForService to wait for kubelet
	I0816 13:49:21.476086   58430 kubeadm.go:582] duration metric: took 4m22.705015833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:49:21.476109   58430 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:49:21.479557   58430 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:49:21.479585   58430 node_conditions.go:123] node cpu capacity is 2
	I0816 13:49:21.479600   58430 node_conditions.go:105] duration metric: took 3.483638ms to run NodePressure ...
	I0816 13:49:21.479613   58430 start.go:241] waiting for startup goroutines ...
	I0816 13:49:21.479622   58430 start.go:246] waiting for cluster config update ...
	I0816 13:49:21.479637   58430 start.go:255] writing updated cluster config ...
	I0816 13:49:21.479949   58430 ssh_runner.go:195] Run: rm -f paused
	I0816 13:49:21.530237   58430 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:49:21.532328   58430 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893736" cluster and "default" namespace by default
	I0816 13:49:20.998486   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:23.498358   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:25.498502   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:27.998622   57240 pod_ready.go:103] pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace has status "Ready":"False"
	I0816 13:49:30.491886   57240 pod_ready.go:82] duration metric: took 4m0.000539211s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" ...
	E0816 13:49:30.491929   57240 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9277d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 13:49:30.491945   57240 pod_ready.go:39] duration metric: took 4m12.492024576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 13:49:30.491972   57240 kubeadm.go:597] duration metric: took 4m19.795438093s to restartPrimaryControlPlane
	W0816 13:49:30.492032   57240 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 13:49:30.492059   57240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:49:56.783263   57240 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.29118348s)
	I0816 13:49:56.783321   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:49:56.798550   57240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 13:49:56.810542   57240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:49:56.820837   57240 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:49:56.820873   57240 kubeadm.go:157] found existing configuration files:
	
	I0816 13:49:56.820947   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:49:56.831998   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:49:56.832057   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:49:56.842351   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:49:56.852062   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:49:56.852119   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:49:56.862337   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.872000   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:49:56.872050   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:49:56.881764   57240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:49:56.891211   57240 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:49:56.891276   57240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 13:49:56.900969   57240 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:49:56.942823   57240 kubeadm.go:310] W0816 13:49:56.895203    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:56.943751   57240 kubeadm.go:310] W0816 13:49:56.896255    2544 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 13:49:57.049491   57240 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:50:05.244505   57240 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 13:50:05.244561   57240 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:05.244657   57240 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:05.244775   57240 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:05.244901   57240 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:05.244989   57240 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:05.246568   57240 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:05.246667   57240 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:05.246779   57240 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:05.246885   57240 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:05.246968   57240 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:05.247065   57240 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:05.247125   57240 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:05.247195   57240 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:05.247260   57240 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:05.247372   57240 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:05.247480   57240 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:05.247521   57240 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:05.247590   57240 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:05.247670   57240 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:05.247751   57240 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 13:50:05.247830   57240 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:05.247886   57240 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:05.247965   57240 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:05.248046   57240 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:05.248100   57240 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:05.249601   57240 out.go:235]   - Booting up control plane ...
	I0816 13:50:05.249698   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:05.249779   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:05.249835   57240 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:05.249930   57240 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:05.250007   57240 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:05.250046   57240 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:05.250184   57240 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 13:50:05.250289   57240 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 13:50:05.250343   57240 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002296228s
	I0816 13:50:05.250403   57240 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 13:50:05.250456   57240 kubeadm.go:310] [api-check] The API server is healthy after 5.002119618s
	I0816 13:50:05.250546   57240 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 13:50:05.250651   57240 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 13:50:05.250700   57240 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 13:50:05.250876   57240 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-302520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 13:50:05.250930   57240 kubeadm.go:310] [bootstrap-token] Using token: dta4cr.diyk2wto3tx3ixlb
	I0816 13:50:05.252120   57240 out.go:235]   - Configuring RBAC rules ...
	I0816 13:50:05.252207   57240 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 13:50:05.252287   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 13:50:05.252418   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 13:50:05.252542   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 13:50:05.252648   57240 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 13:50:05.252724   57240 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 13:50:05.252819   57240 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 13:50:05.252856   57240 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 13:50:05.252895   57240 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 13:50:05.252901   57240 kubeadm.go:310] 
	I0816 13:50:05.253004   57240 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 13:50:05.253022   57240 kubeadm.go:310] 
	I0816 13:50:05.253116   57240 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 13:50:05.253126   57240 kubeadm.go:310] 
	I0816 13:50:05.253155   57240 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 13:50:05.253240   57240 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 13:50:05.253283   57240 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 13:50:05.253289   57240 kubeadm.go:310] 
	I0816 13:50:05.253340   57240 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 13:50:05.253347   57240 kubeadm.go:310] 
	I0816 13:50:05.253405   57240 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 13:50:05.253423   57240 kubeadm.go:310] 
	I0816 13:50:05.253484   57240 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 13:50:05.253556   57240 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 13:50:05.253621   57240 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 13:50:05.253629   57240 kubeadm.go:310] 
	I0816 13:50:05.253710   57240 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 13:50:05.253840   57240 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 13:50:05.253855   57240 kubeadm.go:310] 
	I0816 13:50:05.253962   57240 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254087   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 \
	I0816 13:50:05.254122   57240 kubeadm.go:310] 	--control-plane 
	I0816 13:50:05.254126   57240 kubeadm.go:310] 
	I0816 13:50:05.254202   57240 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 13:50:05.254209   57240 kubeadm.go:310] 
	I0816 13:50:05.254280   57240 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dta4cr.diyk2wto3tx3ixlb \
	I0816 13:50:05.254394   57240 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4dc33a2efd2478f5cbf5aa38b4f971f510c5b41ce450063af834610254851641 
	I0816 13:50:05.254407   57240 cni.go:84] Creating CNI manager for ""
	I0816 13:50:05.254416   57240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 13:50:05.255889   57240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 13:50:05.257086   57240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 13:50:05.268668   57240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
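	(Editor note: the 1-k8s.conflist copied above is the bridge CNI network definition for the node. Its exact contents are not captured in this log, so the sketch below only illustrates the general shape of such a conflist; the plugin list, subnet, and portmap capability are assumptions, written as a small Go program for consistency with the rest of the tooling.)

package main

// Hypothetical sketch: write a minimal bridge CNI conflist similar in spirit to
// the 1-k8s.conflist minikube copies to /etc/cni/net.d. The plugin fields and
// the 10.244.0.0/16 subnet are assumptions, not the file's actual contents.

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// 0644 so the container runtime (cri-o here) can read the network config.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}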
	I0816 13:50:05.288676   57240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 13:50:05.288735   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.288755   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-302520 minikube.k8s.io/updated_at=2024_08_16T13_50_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=embed-certs-302520 minikube.k8s.io/primary=true
	I0816 13:50:05.494987   57240 ops.go:34] apiserver oom_adj: -16
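	(Editor note: the oom_adj of -16 recorded above comes from the one-liner a few lines earlier, cat /proc/$(pgrep kube-apiserver)/oom_adj. The snippet below is a rough local equivalent of that check, not minikube's actual code path, which runs the command over SSH on the guest VM.)

package main

// Illustrative sketch only: find the kube-apiserver PID with pgrep and read its
// oom_adj, mirroring the shell one-liner in the log.

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints one PID per line; it exits non-zero when no process matches.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatalf("kube-apiserver not found: %v", err)
	}
	pid := strings.Fields(string(out))[0]

	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}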
	I0816 13:50:05.495066   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:05.995792   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.495937   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:06.995513   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.495437   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:07.995600   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.495194   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:08.995101   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.495533   57240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 13:50:09.659383   57240 kubeadm.go:1113] duration metric: took 4.370714211s to wait for elevateKubeSystemPrivileges
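	(Editor note: the burst of `kubectl get sa default` calls every ~500ms above is a poll loop waiting for the default ServiceAccount to exist before elevating kube-system privileges. The loop below is a simplified stand-in; the real run invokes the pinned kubectl binary over SSH with an explicit --kubeconfig.)

package main

// Rough stand-in for the poll loop visible above: retry `kubectl get sa default`
// every 500ms until it succeeds or a deadline passes.

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			log.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default service account")
}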
	I0816 13:50:09.659425   57240 kubeadm.go:394] duration metric: took 4m59.010243945s to StartCluster
	I0816 13:50:09.659448   57240 settings.go:142] acquiring lock: {Name:mk96ffc9331ceacd8f3c1c33a59b38e047898a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.659529   57240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:50:09.661178   57240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/kubeconfig: {Name:mk102d7d0e1fecb6a50b9d6c1ee82dcff7f7a898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 13:50:09.661475   57240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 13:50:09.661579   57240 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 13:50:09.661662   57240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-302520"
	I0816 13:50:09.661678   57240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-302520"
	I0816 13:50:09.661693   57240 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-302520"
	W0816 13:50:09.661701   57240 addons.go:243] addon storage-provisioner should already be in state true
	I0816 13:50:09.661683   57240 config.go:182] Loaded profile config "embed-certs-302520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:50:09.661707   57240 addons.go:69] Setting metrics-server=true in profile "embed-certs-302520"
	I0816 13:50:09.661730   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.661732   57240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-302520"
	I0816 13:50:09.661744   57240 addons.go:234] Setting addon metrics-server=true in "embed-certs-302520"
	W0816 13:50:09.661758   57240 addons.go:243] addon metrics-server should already be in state true
	I0816 13:50:09.661789   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.662063   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662070   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662092   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662093   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.662125   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.662177   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.663568   57240 out.go:177] * Verifying Kubernetes components...
	I0816 13:50:09.665144   57240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 13:50:09.679643   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 13:50:09.679976   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0816 13:50:09.680138   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680460   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.680652   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.680677   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681040   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.681060   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.681084   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681449   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.681659   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.681706   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.681737   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.682300   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0816 13:50:09.682644   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.683099   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.683121   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.683464   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.683993   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.684020   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.684695   57240 addons.go:234] Setting addon default-storageclass=true in "embed-certs-302520"
	W0816 13:50:09.684713   57240 addons.go:243] addon default-storageclass should already be in state true
	I0816 13:50:09.684733   57240 host.go:66] Checking if "embed-certs-302520" exists ...
	I0816 13:50:09.685016   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.685044   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.699612   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0816 13:50:09.700235   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.700244   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0816 13:50:09.700776   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.700795   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.700827   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.701285   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.701369   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0816 13:50:09.701457   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.701467   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.701939   57240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-3966/.minikube/bin/docker-machine-driver-kvm2
	I0816 13:50:09.701980   57240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:50:09.702188   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.702209   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.702494   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.702618   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.702635   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.703042   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.703250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.704568   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.705308   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.707074   57240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 13:50:09.707074   57240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 13:50:09.708773   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 13:50:09.708792   57240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 13:50:09.708813   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.708894   57240 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:09.708924   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 13:50:09.708941   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.714305   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714338   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714812   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.714874   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.714928   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.715181   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715215   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.715363   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715399   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.715512   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715556   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.715634   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.715876   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.724172   57240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0816 13:50:09.724636   57240 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:50:09.725184   57240 main.go:141] libmachine: Using API Version  1
	I0816 13:50:09.725213   57240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:50:09.725596   57240 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:50:09.725799   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetState
	I0816 13:50:09.727188   57240 main.go:141] libmachine: (embed-certs-302520) Calling .DriverName
	I0816 13:50:09.727410   57240 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:09.727426   57240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 13:50:09.727447   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHHostname
	I0816 13:50:09.729840   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730228   57240 main.go:141] libmachine: (embed-certs-302520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a3:1b", ip: ""} in network mk-embed-certs-302520: {Iface:virbr1 ExpiryTime:2024-08-16 14:44:54 +0000 UTC Type:0 Mac:52:54:00:15:a3:1b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-302520 Clientid:01:52:54:00:15:a3:1b}
	I0816 13:50:09.730255   57240 main.go:141] libmachine: (embed-certs-302520) DBG | domain embed-certs-302520 has defined IP address 192.168.39.125 and MAC address 52:54:00:15:a3:1b in network mk-embed-certs-302520
	I0816 13:50:09.730534   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHPort
	I0816 13:50:09.730723   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHKeyPath
	I0816 13:50:09.730867   57240 main.go:141] libmachine: (embed-certs-302520) Calling .GetSSHUsername
	I0816 13:50:09.731014   57240 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/embed-certs-302520/id_rsa Username:docker}
	I0816 13:50:09.899195   57240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 13:50:09.939173   57240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958087   57240 node_ready.go:49] node "embed-certs-302520" has status "Ready":"True"
	I0816 13:50:09.958119   57240 node_ready.go:38] duration metric: took 18.911367ms for node "embed-certs-302520" to be "Ready" ...
	I0816 13:50:09.958130   57240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0816 13:50:09.963326   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:10.083721   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 13:50:10.184794   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 13:50:10.203192   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 13:50:10.203214   57240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 13:50:10.285922   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 13:50:10.285950   57240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 13:50:10.370797   57240 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:10.370825   57240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 13:50:10.420892   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.420942   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421261   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421280   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.421282   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421293   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.421303   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.421556   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.421620   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.421625   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.427229   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:10.427250   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:10.427591   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:10.427638   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:10.427655   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:10.454486   57240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 13:50:11.225905   57240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041077031s)
	I0816 13:50:11.225958   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.225969   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226248   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226268   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.226273   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226295   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.226310   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.226561   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.226608   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.226627   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447454   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447484   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.447823   57240 main.go:141] libmachine: (embed-certs-302520) DBG | Closing plugin on server side
	I0816 13:50:11.447890   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.447908   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.447924   57240 main.go:141] libmachine: Making call to close driver server
	I0816 13:50:11.447936   57240 main.go:141] libmachine: (embed-certs-302520) Calling .Close
	I0816 13:50:11.448179   57240 main.go:141] libmachine: Successfully made call to close driver server
	I0816 13:50:11.448195   57240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 13:50:11.448241   57240 addons.go:475] Verifying addon metrics-server=true in "embed-certs-302520"
	I0816 13:50:11.450274   57240 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 13:50:11.451676   57240 addons.go:510] duration metric: took 1.790101568s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 13:50:11.971087   57240 pod_ready.go:103] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:12.470167   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.470193   57240 pod_ready.go:82] duration metric: took 2.506842546s for pod "coredns-6f6b679f8f-whnqh" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.470203   57240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474959   57240 pod_ready.go:93] pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.474980   57240 pod_ready.go:82] duration metric: took 4.769458ms for pod "coredns-6f6b679f8f-zh69g" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.474988   57240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479388   57240 pod_ready.go:93] pod "etcd-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.479410   57240 pod_ready.go:82] duration metric: took 4.41564ms for pod "etcd-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.479421   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483567   57240 pod_ready.go:93] pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:12.483589   57240 pod_ready.go:82] duration metric: took 4.159906ms for pod "kube-apiserver-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:12.483600   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:14.490212   57240 pod_ready.go:103] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"False"
	I0816 13:50:15.990204   57240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.990226   57240 pod_ready.go:82] duration metric: took 3.506618768s for pod "kube-controller-manager-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.990235   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994580   57240 pod_ready.go:93] pod "kube-proxy-spgtw" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:15.994597   57240 pod_ready.go:82] duration metric: took 4.356588ms for pod "kube-proxy-spgtw" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:15.994605   57240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068472   57240 pod_ready.go:93] pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace has status "Ready":"True"
	I0816 13:50:16.068495   57240 pod_ready.go:82] duration metric: took 73.884906ms for pod "kube-scheduler-embed-certs-302520" in "kube-system" namespace to be "Ready" ...
	I0816 13:50:16.068503   57240 pod_ready.go:39] duration metric: took 6.110362477s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
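	(Editor note: each pod_ready line above boils down to reading the pod's Ready condition and comparing it to "True". Below is a minimal sketch of that check via kubectl's jsonpath output; the pod name and namespace are taken from the log and the --kubeconfig flag is omitted.)

package main

// Sketch of the per-pod "Ready" check behind the pod_ready lines above:
// read the pod's Ready condition via kubectl jsonpath and compare to "True".

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pod", "coredns-6f6b679f8f-whnqh",
		"-n", "kube-system",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Ready:", strings.TrimSpace(string(out)) == "True")
}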
	I0816 13:50:16.068519   57240 api_server.go:52] waiting for apiserver process to appear ...
	I0816 13:50:16.068579   57240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:50:16.086318   57240 api_server.go:72] duration metric: took 6.424804798s to wait for apiserver process to appear ...
	I0816 13:50:16.086345   57240 api_server.go:88] waiting for apiserver healthz status ...
	I0816 13:50:16.086361   57240 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0816 13:50:16.091170   57240 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0816 13:50:16.092122   57240 api_server.go:141] control plane version: v1.31.0
	I0816 13:50:16.092138   57240 api_server.go:131] duration metric: took 5.787898ms to wait for apiserver health ...
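	(Editor note: the healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, expected to return 200 with the body "ok". The sketch below is self-contained; InsecureSkipVerify is only there to keep the example standalone, whereas the real check trusts the cluster CA.)

package main

// Minimal sketch of an apiserver /healthz probe like the one logged above.

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.125:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" on a healthy apiserver
}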
	I0816 13:50:16.092146   57240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 13:50:16.271303   57240 system_pods.go:59] 9 kube-system pods found
	I0816 13:50:16.271338   57240 system_pods.go:61] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.271344   57240 system_pods.go:61] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.271348   57240 system_pods.go:61] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.271353   57240 system_pods.go:61] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.271359   57240 system_pods.go:61] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.271364   57240 system_pods.go:61] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.271370   57240 system_pods.go:61] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.271379   57240 system_pods.go:61] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.271389   57240 system_pods.go:61] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.271398   57240 system_pods.go:74] duration metric: took 179.244421ms to wait for pod list to return data ...
	I0816 13:50:16.271410   57240 default_sa.go:34] waiting for default service account to be created ...
	I0816 13:50:16.468167   57240 default_sa.go:45] found service account: "default"
	I0816 13:50:16.468196   57240 default_sa.go:55] duration metric: took 196.779435ms for default service account to be created ...
	I0816 13:50:16.468207   57240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 13:50:16.670917   57240 system_pods.go:86] 9 kube-system pods found
	I0816 13:50:16.670943   57240 system_pods.go:89] "coredns-6f6b679f8f-whnqh" [6f4d69de-4130-4959-b1ef-9ddfbe5d6a72] Running
	I0816 13:50:16.670949   57240 system_pods.go:89] "coredns-6f6b679f8f-zh69g" [b65235cd-590b-4108-b5fc-b5f6072c8f5f] Running
	I0816 13:50:16.670953   57240 system_pods.go:89] "etcd-embed-certs-302520" [54a46f37-7b4c-4732-908d-df64558dd74f] Running
	I0816 13:50:16.670957   57240 system_pods.go:89] "kube-apiserver-embed-certs-302520" [d58b625b-c94e-44a7-ac30-18b1e2e8691e] Running
	I0816 13:50:16.670960   57240 system_pods.go:89] "kube-controller-manager-embed-certs-302520" [6bb26bff-7111-40c5-9f18-9ca1b733f990] Running
	I0816 13:50:16.670963   57240 system_pods.go:89] "kube-proxy-spgtw" [e9b2b029-a32e-4dd5-b4ff-bd3d61b97a02] Running
	I0816 13:50:16.670967   57240 system_pods.go:89] "kube-scheduler-embed-certs-302520" [aea7ddf8-67b1-468d-9ab8-c78b0bfecdbb] Running
	I0816 13:50:16.670972   57240 system_pods.go:89] "metrics-server-6867b74b74-q58h2" [1351eabe-df61-4b9c-b67b-2e9c963b0eaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 13:50:16.670976   57240 system_pods.go:89] "storage-provisioner" [8e139aaf-e6d1-4661-8c7b-90c1cc9827d4] Running
	I0816 13:50:16.670984   57240 system_pods.go:126] duration metric: took 202.771216ms to wait for k8s-apps to be running ...
	I0816 13:50:16.670990   57240 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 13:50:16.671040   57240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:16.686873   57240 system_svc.go:56] duration metric: took 15.876641ms WaitForService to wait for kubelet
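	(Editor note: the WaitForService step above hinges on the exit status of `systemctl is-active`: zero means the kubelet unit is active. A tiny sketch of that check follows, with the unit name simplified and sudo omitted.)

package main

// Tiny sketch of the kubelet-service liveness check: `systemctl is-active`
// exits 0 when the unit is active, non-zero otherwise.

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}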
	I0816 13:50:16.686906   57240 kubeadm.go:582] duration metric: took 7.025397638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 13:50:16.686925   57240 node_conditions.go:102] verifying NodePressure condition ...
	I0816 13:50:16.869367   57240 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 13:50:16.869393   57240 node_conditions.go:123] node cpu capacity is 2
	I0816 13:50:16.869405   57240 node_conditions.go:105] duration metric: took 182.475776ms to run NodePressure ...
	I0816 13:50:16.869420   57240 start.go:241] waiting for startup goroutines ...
	I0816 13:50:16.869427   57240 start.go:246] waiting for cluster config update ...
	I0816 13:50:16.869436   57240 start.go:255] writing updated cluster config ...
	I0816 13:50:16.869686   57240 ssh_runner.go:195] Run: rm -f paused
	I0816 13:50:16.919168   57240 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 13:50:16.921207   57240 out.go:177] * Done! kubectl is now configured to use "embed-certs-302520" cluster and "default" namespace by default
	I0816 13:50:32.875973   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:50:32.876092   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:50:32.877853   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:50:32.877964   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:50:32.878066   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:50:32.878184   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:50:32.878286   57945 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:50:32.878362   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:50:32.880211   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:50:32.880308   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:50:32.880389   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:50:32.880480   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:50:32.880575   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:50:32.880684   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:50:32.880782   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:50:32.880874   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:50:32.880988   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:50:32.881100   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:50:32.881190   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:50:32.881228   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:50:32.881274   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:50:32.881318   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:50:32.881362   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:50:32.881418   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:50:32.881473   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:50:32.881585   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:50:32.881676   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:50:32.881747   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:50:32.881846   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:50:32.883309   57945 out.go:235]   - Booting up control plane ...
	I0816 13:50:32.883394   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:50:32.883493   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:50:32.883563   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:50:32.883661   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:50:32.883867   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:50:32.883916   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:50:32.883985   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884185   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884285   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884483   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884557   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884718   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.884775   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.884984   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885058   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:50:32.885258   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:50:32.885272   57945 kubeadm.go:310] 
	I0816 13:50:32.885367   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:50:32.885419   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:50:32.885426   57945 kubeadm.go:310] 
	I0816 13:50:32.885455   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:50:32.885489   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:50:32.885579   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:50:32.885587   57945 kubeadm.go:310] 
	I0816 13:50:32.885709   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:50:32.885745   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:50:32.885774   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:50:32.885781   57945 kubeadm.go:310] 
	I0816 13:50:32.885866   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:50:32.885938   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0816 13:50:32.885945   57945 kubeadm.go:310] 
	I0816 13:50:32.886039   57945 kubeadm.go:310] 		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:50:32.886139   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:50:32.886251   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:50:32.886331   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:50:32.886369   57945 kubeadm.go:310] 
	W0816 13:50:32.886438   57945 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 13:50:32.886474   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 13:50:33.351503   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:50:33.366285   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 13:50:33.378157   57945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 13:50:33.378180   57945 kubeadm.go:157] found existing configuration files:
	
	I0816 13:50:33.378241   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 13:50:33.389301   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 13:50:33.389358   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 13:50:33.400730   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 13:50:33.412130   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 13:50:33.412209   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 13:50:33.423484   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.433610   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 13:50:33.433676   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 13:50:33.445384   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 13:50:33.456098   57945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 13:50:33.456159   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
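	(Editor note: the grep/rm sequence from 13:50:33.378 onward implements a simple rule: keep a kubeconfig only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so the retried kubeadm init can regenerate it. A condensed sketch of that pattern, with paths mirroring the log and error handling simplified:)

package main

// Condensed sketch of the stale-config cleanup pattern shown above: for each
// kubeconfig, keep it only if it points at the expected control-plane endpoint,
// otherwise remove it so kubeadm init can rewrite it.

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it, kubeadm init will regenerate it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("could not remove %s: %v", f, rmErr)
			}
		}
	}
}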
	I0816 13:50:33.466036   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 13:50:33.693238   57945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 13:52:29.699171   57945 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 13:52:29.699367   57945 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 13:52:29.700903   57945 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 13:52:29.701036   57945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 13:52:29.701228   57945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 13:52:29.701460   57945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 13:52:29.701761   57945 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 13:52:29.701863   57945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 13:52:29.703486   57945 out.go:235]   - Generating certificates and keys ...
	I0816 13:52:29.703550   57945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 13:52:29.703603   57945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 13:52:29.703671   57945 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 13:52:29.703732   57945 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 13:52:29.703823   57945 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 13:52:29.703918   57945 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 13:52:29.704016   57945 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 13:52:29.704098   57945 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 13:52:29.704190   57945 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 13:52:29.704283   57945 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 13:52:29.704344   57945 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 13:52:29.704407   57945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 13:52:29.704469   57945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 13:52:29.704541   57945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 13:52:29.704630   57945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 13:52:29.704674   57945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 13:52:29.704753   57945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 13:52:29.704824   57945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 13:52:29.704855   57945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 13:52:29.704939   57945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 13:52:29.706461   57945 out.go:235]   - Booting up control plane ...
	I0816 13:52:29.706555   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 13:52:29.706672   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 13:52:29.706744   57945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 13:52:29.706836   57945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 13:52:29.707002   57945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 13:52:29.707047   57945 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 13:52:29.707126   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707345   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707438   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707691   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707752   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.707892   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.707969   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708132   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708219   57945 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 13:52:29.708478   57945 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 13:52:29.708500   57945 kubeadm.go:310] 
	I0816 13:52:29.708538   57945 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 13:52:29.708579   57945 kubeadm.go:310] 		timed out waiting for the condition
	I0816 13:52:29.708593   57945 kubeadm.go:310] 
	I0816 13:52:29.708633   57945 kubeadm.go:310] 	This error is likely caused by:
	I0816 13:52:29.708660   57945 kubeadm.go:310] 		- The kubelet is not running
	I0816 13:52:29.708743   57945 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 13:52:29.708750   57945 kubeadm.go:310] 
	I0816 13:52:29.708841   57945 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 13:52:29.708892   57945 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 13:52:29.708959   57945 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 13:52:29.708969   57945 kubeadm.go:310] 
	I0816 13:52:29.709120   57945 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 13:52:29.709237   57945 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 13:52:29.709248   57945 kubeadm.go:310] 
	I0816 13:52:29.709412   57945 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 13:52:29.709551   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 13:52:29.709660   57945 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 13:52:29.709755   57945 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 13:52:29.709782   57945 kubeadm.go:310] 
	I0816 13:52:29.709836   57945 kubeadm.go:394] duration metric: took 7m57.514215667s to StartCluster
	I0816 13:52:29.709886   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 13:52:29.709942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 13:52:29.753540   57945 cri.go:89] found id: ""
	I0816 13:52:29.753569   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.753580   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0816 13:52:29.753588   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 13:52:29.753655   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 13:52:29.793951   57945 cri.go:89] found id: ""
	I0816 13:52:29.793975   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.793983   57945 logs.go:278] No container was found matching "etcd"
	I0816 13:52:29.793988   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 13:52:29.794040   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 13:52:29.831303   57945 cri.go:89] found id: ""
	I0816 13:52:29.831334   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.831345   57945 logs.go:278] No container was found matching "coredns"
	I0816 13:52:29.831356   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 13:52:29.831420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 13:52:29.867252   57945 cri.go:89] found id: ""
	I0816 13:52:29.867277   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.867285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0816 13:52:29.867296   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 13:52:29.867349   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 13:52:29.901161   57945 cri.go:89] found id: ""
	I0816 13:52:29.901188   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.901204   57945 logs.go:278] No container was found matching "kube-proxy"
	I0816 13:52:29.901212   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 13:52:29.901268   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 13:52:29.935781   57945 cri.go:89] found id: ""
	I0816 13:52:29.935808   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.935816   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 13:52:29.935823   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 13:52:29.935873   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 13:52:29.970262   57945 cri.go:89] found id: ""
	I0816 13:52:29.970292   57945 logs.go:276] 0 containers: []
	W0816 13:52:29.970303   57945 logs.go:278] No container was found matching "kindnet"
	I0816 13:52:29.970310   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 13:52:29.970370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 13:52:30.026580   57945 cri.go:89] found id: ""
	I0816 13:52:30.026610   57945 logs.go:276] 0 containers: []
	W0816 13:52:30.026621   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 13:52:30.026642   57945 logs.go:123] Gathering logs for dmesg ...
	I0816 13:52:30.026657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 13:52:30.050718   57945 logs.go:123] Gathering logs for describe nodes ...
	I0816 13:52:30.050747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 13:52:30.146600   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 13:52:30.146623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0816 13:52:30.146637   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 13:52:30.268976   57945 logs.go:123] Gathering logs for container status ...
	I0816 13:52:30.269012   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 13:52:30.312306   57945 logs.go:123] Gathering logs for kubelet ...
	I0816 13:52:30.312341   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 13:52:30.363242   57945 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 13:52:30.363303   57945 out.go:270] * 
	W0816 13:52:30.363365   57945 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.363377   57945 out.go:270] * 
	W0816 13:52:30.364104   57945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 13:52:30.366989   57945 out.go:201] 
	W0816 13:52:30.368192   57945 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 13:52:30.368293   57945 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 13:52:30.368318   57945 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 13:52:30.369674   57945 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.068459973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817026068440301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c57d3a8-da00-4a43-8cb6-6ad535dd68fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.069082116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9e1d15e-16c0-4912-acfa-8189a67f9482 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.069130676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9e1d15e-16c0-4912-acfa-8189a67f9482 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.069160058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e9e1d15e-16c0-4912-acfa-8189a67f9482 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.101111694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8172650-c110-4c8c-88f1-ca25915e118d name=/runtime.v1.RuntimeService/Version
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.101180311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8172650-c110-4c8c-88f1-ca25915e118d name=/runtime.v1.RuntimeService/Version
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.102384609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adad7dc0-4969-47c3-805f-db7468860301 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.102842823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817026102817598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adad7dc0-4969-47c3-805f-db7468860301 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.103320550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22d5942a-80e6-417e-8189-a42fcadd951a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.103375390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22d5942a-80e6-417e-8189-a42fcadd951a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.103407499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=22d5942a-80e6-417e-8189-a42fcadd951a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.135818905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3878757-8885-4c5e-be0c-c34ad76c2c8c name=/runtime.v1.RuntimeService/Version
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.135900015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3878757-8885-4c5e-be0c-c34ad76c2c8c name=/runtime.v1.RuntimeService/Version
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.137334897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d520e088-1885-4141-8cc1-68bb2db561fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.137802233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817026137777525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d520e088-1885-4141-8cc1-68bb2db561fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.138304578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=953ecf37-c819-4e92-8584-dd99ebc92867 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.138375563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=953ecf37-c819-4e92-8584-dd99ebc92867 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.138414226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=953ecf37-c819-4e92-8584-dd99ebc92867 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.170522666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7916d36a-ecfe-4c98-9312-b32caa12709d name=/runtime.v1.RuntimeService/Version
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.170657659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7916d36a-ecfe-4c98-9312-b32caa12709d name=/runtime.v1.RuntimeService/Version
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.172208382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e5864ed-f1aa-420a-ad1f-2c6094774c8f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.172645121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723817026172622811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e5864ed-f1aa-420a-ad1f-2c6094774c8f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.173345211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f74e6c7d-4e04-4132-bc87-2554a10ad148 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.173403685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f74e6c7d-4e04-4132-bc87-2554a10ad148 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 14:03:46 old-k8s-version-882237 crio[655]: time="2024-08-16 14:03:46.173435437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f74e6c7d-4e04-4132-bc87-2554a10ad148 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 13:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050110] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.904148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.568641] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.219540] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.067905] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075212] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.209113] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.188995] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.278563] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +6.705927] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.067606] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.266713] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[ +11.277225] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 13:48] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[Aug16 13:50] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +0.065917] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:03:46 up 19 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-882237 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000a6b440)
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: goroutine 156 [select]:
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009e5ef0, 0x4f0ac20, 0xc0004bdd10, 0x1, 0xc00009e0c0)
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00045a380, 0xc00009e0c0)
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a650e0, 0xc000b8abc0)
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 16 14:03:42 old-k8s-version-882237 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 16 14:03:42 old-k8s-version-882237 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 14:03:42 old-k8s-version-882237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 14:03:43 old-k8s-version-882237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Aug 16 14:03:43 old-k8s-version-882237 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 14:03:43 old-k8s-version-882237 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 14:03:43 old-k8s-version-882237 kubelet[6844]: I0816 14:03:43.575907    6844 server.go:416] Version: v1.20.0
	Aug 16 14:03:43 old-k8s-version-882237 kubelet[6844]: I0816 14:03:43.576260    6844 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 14:03:43 old-k8s-version-882237 kubelet[6844]: I0816 14:03:43.579408    6844 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 14:03:43 old-k8s-version-882237 kubelet[6844]: W0816 14:03:43.580820    6844 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 14:03:43 old-k8s-version-882237 kubelet[6844]: I0816 14:03:43.582081    6844 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 2 (221.354806ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-882237" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.53s)
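The failure above ends with the v1.20.0 kubelet crash-looping (systemd restart counter at 137) and the warning "Cannot detect current cgroup on cgroup v2"; the minikube suggestion recorded in the log points at the kubelet cgroup driver for exactly this case. A minimal sketch of a retry with that suggestion applied, assuming the same profile name, driver and runtime used elsewhere in this report (the original start flags for the old-k8s-version profile are not shown here, so this is illustrative only):

	out/minikube-linux-amd64 start -p old-k8s-version-882237 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd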

                                                
                                    

Test pass (246/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 13.09
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 87.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 139.66
31 TestAddons/serial/GCPAuth/Namespaces 0.15
33 TestAddons/parallel/Registry 17.16
35 TestAddons/parallel/InspektorGadget 11.96
37 TestAddons/parallel/HelmTiller 12.48
39 TestAddons/parallel/CSI 90.49
40 TestAddons/parallel/Headlamp 18.11
41 TestAddons/parallel/CloudSpanner 6.59
42 TestAddons/parallel/LocalPath 56.27
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 11.79
46 TestCertOptions 86.21
47 TestCertExpiration 300.58
49 TestForceSystemdFlag 74.49
50 TestForceSystemdEnv 62.93
52 TestKVMDriverInstallOrUpdate 7.33
56 TestErrorSpam/setup 40.76
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.69
59 TestErrorSpam/pause 1.52
60 TestErrorSpam/unpause 1.68
61 TestErrorSpam/stop 6.4
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 49.71
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 31.42
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
73 TestFunctional/serial/CacheCmd/cache/add_local 2.24
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 33.66
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.43
84 TestFunctional/serial/LogsFileCmd 1.42
85 TestFunctional/serial/InvalidService 4.49
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 13.13
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.96
95 TestFunctional/parallel/ServiceCmdConnect 7.89
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 48.75
99 TestFunctional/parallel/SSHCmd 0.4
100 TestFunctional/parallel/CpCmd 1.21
101 TestFunctional/parallel/MySQL 29.79
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.39
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
111 TestFunctional/parallel/License 0.62
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.18
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.77
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.83
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
128 TestFunctional/parallel/ImageCommands/ImageBuild 4.19
129 TestFunctional/parallel/ImageCommands/Setup 1.98
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.61
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
135 TestFunctional/parallel/ProfileCmd/profile_list 0.28
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
138 TestFunctional/parallel/MountCmd/any-port 14.35
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.37
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.65
144 TestFunctional/parallel/ServiceCmd/List 0.28
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
147 TestFunctional/parallel/ServiceCmd/Format 0.33
148 TestFunctional/parallel/ServiceCmd/URL 0.34
149 TestFunctional/parallel/MountCmd/specific-port 1.91
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 245.5
158 TestMultiControlPlane/serial/DeployApp 8.11
159 TestMultiControlPlane/serial/PingHostFromPods 1.17
160 TestMultiControlPlane/serial/AddWorkerNode 54.91
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.37
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.53
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 319.13
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
174 TestMultiControlPlane/serial/AddSecondaryNode 77.87
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 84.4
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.73
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.34
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 85.07
211 TestMountStart/serial/StartWithMountFirst 29.59
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 28.95
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.9
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 2.27
218 TestMountStart/serial/RestartStopped 22.79
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 113.03
223 TestMultiNode/serial/DeployApp2Nodes 6.49
224 TestMultiNode/serial/PingHostFrom2Pods 0.8
225 TestMultiNode/serial/AddNode 50.57
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.2
228 TestMultiNode/serial/CopyFile 6.88
229 TestMultiNode/serial/StopNode 2.24
230 TestMultiNode/serial/StartAfterStop 40.57
232 TestMultiNode/serial/DeleteNode 1.96
234 TestMultiNode/serial/RestartMultiNode 185.92
235 TestMultiNode/serial/ValidateNameConflict 43.3
242 TestScheduledStopUnix 113.85
246 TestRunningBinaryUpgrade 213.66
258 TestPause/serial/Start 132.92
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 56.98
263 TestNoKubernetes/serial/StartWithStopK8s 9.07
264 TestNoKubernetes/serial/Start 32.66
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
266 TestNoKubernetes/serial/ProfileList 13.98
267 TestNoKubernetes/serial/Stop 1.28
272 TestStoppedBinaryUpgrade/Setup 3.14
277 TestNetworkPlugins/group/false 2.92
278 TestStoppedBinaryUpgrade/Upgrade 102.98
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
287 TestStartStop/group/no-preload/serial/FirstStart 116.06
289 TestStartStop/group/embed-certs/serial/FirstStart 53.61
290 TestStartStop/group/embed-certs/serial/DeployApp 10.28
291 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
293 TestStartStop/group/no-preload/serial/DeployApp 9.27
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.92
301 TestStartStop/group/embed-certs/serial/SecondStart 672.2
303 TestStartStop/group/no-preload/serial/SecondStart 562.35
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
307 TestStartStop/group/old-k8s-version/serial/Stop 3.28
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 425.84
321 TestStartStop/group/newest-cni/serial/FirstStart 47.57
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
324 TestStartStop/group/newest-cni/serial/Stop 7.31
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/newest-cni/serial/SecondStart 38.61
327 TestNetworkPlugins/group/auto/Start 65.38
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/newest-cni/serial/Pause 2.61
332 TestNetworkPlugins/group/kindnet/Start 74.38
333 TestNetworkPlugins/group/calico/Start 120.33
334 TestNetworkPlugins/group/auto/KubeletFlags 0.24
335 TestNetworkPlugins/group/auto/NetCatPod 13.29
336 TestNetworkPlugins/group/auto/DNS 0.17
337 TestNetworkPlugins/group/auto/Localhost 0.13
338 TestNetworkPlugins/group/auto/HairPin 0.14
339 TestNetworkPlugins/group/custom-flannel/Start 76.65
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
342 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
343 TestNetworkPlugins/group/kindnet/DNS 0.16
344 TestNetworkPlugins/group/kindnet/Localhost 0.15
345 TestNetworkPlugins/group/kindnet/HairPin 0.13
346 TestNetworkPlugins/group/enable-default-cni/Start 87.86
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71
349 TestNetworkPlugins/group/calico/ControllerPod 6.01
350 TestNetworkPlugins/group/flannel/Start 89.41
351 TestNetworkPlugins/group/calico/KubeletFlags 0.2
352 TestNetworkPlugins/group/calico/NetCatPod 10.23
353 TestNetworkPlugins/group/calico/DNS 0.21
354 TestNetworkPlugins/group/calico/Localhost 0.17
355 TestNetworkPlugins/group/calico/HairPin 0.17
356 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
357 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.29
358 TestNetworkPlugins/group/custom-flannel/DNS 0.21
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
360 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
361 TestNetworkPlugins/group/bridge/Start 60.4
362 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
363 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
364 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
365 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
366 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
369 TestNetworkPlugins/group/bridge/NetCatPod 12.22
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
371 TestNetworkPlugins/group/flannel/NetCatPod 11.26
372 TestNetworkPlugins/group/bridge/DNS 0.16
373 TestNetworkPlugins/group/bridge/Localhost 0.13
374 TestNetworkPlugins/group/bridge/HairPin 0.12
375 TestNetworkPlugins/group/flannel/DNS 0.16
376 TestNetworkPlugins/group/flannel/Localhost 0.12
377 TestNetworkPlugins/group/flannel/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (25.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-238279 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-238279 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.0838341s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-238279
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-238279: exit status 85 (54.73442ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-238279 | jenkins | v1.33.1 | 16 Aug 24 12:20 UTC |          |
	|         | -p download-only-238279        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:20:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:20:57.160112   11160 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:20:57.160221   11160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:20:57.160231   11160 out.go:358] Setting ErrFile to fd 2...
	I0816 12:20:57.160235   11160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:20:57.160411   11160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	W0816 12:20:57.160532   11160 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-3966/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-3966/.minikube/config/config.json: no such file or directory
	I0816 12:20:57.161119   11160 out.go:352] Setting JSON to true
	I0816 12:20:57.162054   11160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":202,"bootTime":1723810655,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:20:57.162112   11160 start.go:139] virtualization: kvm guest
	I0816 12:20:57.164441   11160 out.go:97] [download-only-238279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0816 12:20:57.164546   11160 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 12:20:57.164597   11160 notify.go:220] Checking for updates...
	I0816 12:20:57.165931   11160 out.go:169] MINIKUBE_LOCATION=19423
	I0816 12:20:57.167351   11160 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:20:57.168984   11160 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:20:57.170560   11160 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:20:57.172006   11160 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0816 12:20:57.174539   11160 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 12:20:57.174842   11160 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:20:57.270179   11160 out.go:97] Using the kvm2 driver based on user configuration
	I0816 12:20:57.270203   11160 start.go:297] selected driver: kvm2
	I0816 12:20:57.270214   11160 start.go:901] validating driver "kvm2" against <nil>
	I0816 12:20:57.270565   11160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:20:57.270687   11160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:20:57.284780   11160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:20:57.284829   11160 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:20:57.285322   11160 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0816 12:20:57.285466   11160 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 12:20:57.285521   11160 cni.go:84] Creating CNI manager for ""
	I0816 12:20:57.285533   11160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:20:57.285542   11160 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 12:20:57.285583   11160 start.go:340] cluster config:
	{Name:download-only-238279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-238279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:20:57.285739   11160 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:20:57.287404   11160 out.go:97] Downloading VM boot image ...
	I0816 12:20:57.287433   11160 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 12:21:07.639359   11160 out.go:97] Starting "download-only-238279" primary control-plane node in "download-only-238279" cluster
	I0816 12:21:07.639381   11160 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 12:21:07.746019   11160 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:21:07.746052   11160 cache.go:56] Caching tarball of preloaded images
	I0816 12:21:07.746223   11160 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 12:21:07.748381   11160 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 12:21:07.748416   11160 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 12:21:07.861480   11160 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:21:20.359530   11160 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 12:21:20.359650   11160 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 12:21:21.258153   11160 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 12:21:21.258527   11160 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/download-only-238279/config.json ...
	I0816 12:21:21.258568   11160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/download-only-238279/config.json: {Name:mk7795b2b6e8ed89a8bd69c8f0a8e385005f6d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:21:21.258746   11160 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 12:21:21.258972   11160 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-238279 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238279"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-238279
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (13.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-723080 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-723080 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.086188689s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.09s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-723080
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-723080: exit status 85 (56.910325ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-238279 | jenkins | v1.33.1 | 16 Aug 24 12:20 UTC |                     |
	|         | -p download-only-238279        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| delete  | -p download-only-238279        | download-only-238279 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC | 16 Aug 24 12:21 UTC |
	| start   | -o=json --download-only        | download-only-723080 | jenkins | v1.33.1 | 16 Aug 24 12:21 UTC |                     |
	|         | -p download-only-723080        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:21:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:21:22.547535   11420 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:21:22.547793   11420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:21:22.547803   11420 out.go:358] Setting ErrFile to fd 2...
	I0816 12:21:22.547808   11420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:21:22.547975   11420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:21:22.548503   11420 out.go:352] Setting JSON to true
	I0816 12:21:22.549407   11420 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":228,"bootTime":1723810655,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:21:22.549465   11420 start.go:139] virtualization: kvm guest
	I0816 12:21:22.551791   11420 out.go:97] [download-only-723080] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:21:22.551963   11420 notify.go:220] Checking for updates...
	I0816 12:21:22.553488   11420 out.go:169] MINIKUBE_LOCATION=19423
	I0816 12:21:22.554888   11420 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:21:22.556105   11420 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:21:22.557318   11420 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:21:22.558713   11420 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0816 12:21:22.561016   11420 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 12:21:22.561286   11420 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:21:22.592392   11420 out.go:97] Using the kvm2 driver based on user configuration
	I0816 12:21:22.592420   11420 start.go:297] selected driver: kvm2
	I0816 12:21:22.592433   11420 start.go:901] validating driver "kvm2" against <nil>
	I0816 12:21:22.592743   11420 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:21:22.592838   11420 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-3966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 12:21:22.607289   11420 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 12:21:22.607357   11420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:21:22.607998   11420 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0816 12:21:22.608321   11420 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 12:21:22.608369   11420 cni.go:84] Creating CNI manager for ""
	I0816 12:21:22.608384   11420 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 12:21:22.608395   11420 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 12:21:22.608480   11420 start.go:340] cluster config:
	{Name:download-only-723080 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-723080 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:21:22.608708   11420 iso.go:125] acquiring lock: {Name:mk00139b359c6fe4c84d80e843a2099303fb7c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:21:22.610489   11420 out.go:97] Starting "download-only-723080" primary control-plane node in "download-only-723080" cluster
	I0816 12:21:22.610506   11420 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:21:23.209201   11420 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 12:21:23.209233   11420 cache.go:56] Caching tarball of preloaded images
	I0816 12:21:23.209398   11420 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:21:23.211443   11420 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 12:21:23.211457   11420 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 12:21:23.318883   11420 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19423-3966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-723080 host does not exist
	  To start a cluster, run: "minikube start -p download-only-723080"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-723080
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-862449 --alsologtostderr --binary-mirror http://127.0.0.1:43873 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-862449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-862449
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (87.55s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-718759 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-718759 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.695040141s)
helpers_test.go:175: Cleaning up "offline-crio-718759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-718759
--- PASS: TestOffline (87.55s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-966941
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-966941: exit status 85 (53.518708ms)

                                                
                                                
-- stdout --
	* Profile "addons-966941" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966941"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-966941
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-966941: exit status 85 (53.843014ms)

                                                
                                                
-- stdout --
	* Profile "addons-966941" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966941"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (139.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-966941 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-966941 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.658753445s)
--- PASS: TestAddons/Setup (139.66s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-966941 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-966941 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/parallel/Registry (17.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.965684ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-pbs55" [ce8c7d7b-e1bd-4400-989e-ff5ee6472906] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002692681s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ntgtj" [1d1c166b-3b57-45d7-a283-a4e340b16541] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003839435s
addons_test.go:342: (dbg) Run:  kubectl --context addons-966941 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-966941 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-966941 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.403021355s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 ip
2024/08/16 12:24:37 [DEBUG] GET http://192.168.39.129:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.16s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.96s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5fg2r" [d6f660a5-4e98-4635-b7af-d40bc8f33ab0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004381727s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-966941
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-966941: (5.953001811s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.48s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.319779ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-v26s2" [505f660d-cfba-443f-a970-69b28a26f3c1] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.00408187s
addons_test.go:475: (dbg) Run:  kubectl --context addons-966941 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-966941 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.869642373s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.48s)

                                                
                                    
TestAddons/parallel/CSI (90.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.681319ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-966941 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-966941 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8e163a45-1a78-42d6-bb47-7c3b46f72e89] Pending
helpers_test.go:344: "task-pv-pod" [8e163a45-1a78-42d6-bb47-7c3b46f72e89] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8e163a45-1a78-42d6-bb47-7c3b46f72e89] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004457432s
addons_test.go:590: (dbg) Run:  kubectl --context addons-966941 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-966941 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-966941 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-966941 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-966941 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [782130f2-f13b-49c0-9a87-c0168db5aeb4] Pending
helpers_test.go:344: "task-pv-pod-restore" [782130f2-f13b-49c0-9a87-c0168db5aeb4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [782130f2-f13b-49c0-9a87-c0168db5aeb4] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00367101s
addons_test.go:632: (dbg) Run:  kubectl --context addons-966941 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-966941 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-966941 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.724078088s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (90.49s)

                                                
                                    
TestAddons/parallel/Headlamp (18.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-966941 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-966941 --alsologtostderr -v=1: (1.249439957s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-qbwks" [c981e99c-4458-4a01-9478-490641ef3b18] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-qbwks" [c981e99c-4458-4a01-9478-490641ef3b18] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-qbwks" [c981e99c-4458-4a01-9478-490641ef3b18] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004118023s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 addons disable headlamp --alsologtostderr -v=1: (5.859619776s)
--- PASS: TestAddons/parallel/Headlamp (18.11s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-bxxmx" [81ace2ee-d182-4834-8876-64536852dd05] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005265432s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-966941
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (56.27s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-966941 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-966941 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7657d6ba-63a2-4087-80b5-0026dce28f28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7657d6ba-63a2-4087-80b5-0026dce28f28] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7657d6ba-63a2-4087-80b5-0026dce28f28] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003637092s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-966941 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 ssh "cat /opt/local-path-provisioner/pvc-e2d2f869-e0e4-4450-9779-9bdaae043e0c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-966941 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-966941 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.506451016s)
--- PASS: TestAddons/parallel/LocalPath (56.27s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t2vgg" [67831983-255a-47c4-9db7-8be119bea725] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004753114s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-966941
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
TestAddons/parallel/Yakd (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pk5r7" [ae6fb18d-0d98-4148-b637-cca61f49efb4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003706699s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-966941 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-966941 addons disable yakd --alsologtostderr -v=1: (5.785770566s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
x
+
TestCertOptions (86.21s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-779306 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-779306 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m24.986872511s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-779306 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-779306 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-779306 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-779306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-779306
--- PASS: TestCertOptions (86.21s)
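The SAN check exercised above can be repeated by hand with the same flags. A minimal sketch; the profile name is arbitrary and the grep filters are only for readability:
  $ out/minikube-linux-amd64 start -p cert-options-779306 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
  # the extra names and IPs should appear in the serving certificate's SANs
  $ out/minikube-linux-amd64 -p cert-options-779306 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  # admin.conf should point at the non-default port 8555
  $ out/minikube-linux-amd64 ssh -p cert-options-779306 -- "sudo cat /etc/kubernetes/admin.conf" | grep server
  $ out/minikube-linux-amd64 delete -p cert-options-779306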

                                                
                                    
x
+
TestCertExpiration (300.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-050553 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-050553 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m28.838011767s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-050553 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-050553 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.977746349s)
helpers_test.go:175: Cleaning up "cert-expiration-050553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-050553
--- PASS: TestCertExpiration (300.58s)
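The expiration flow above is: start with a short certificate lifetime, let it lapse, then run start again with a longer --cert-expiration so the certificates are regenerated. A minimal sketch using the same commands:
  $ out/minikube-linux-amd64 start -p cert-expiration-050553 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  # ... wait for the 3m certificates to expire ...
  $ out/minikube-linux-amd64 start -p cert-expiration-050553 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 delete -p cert-expiration-050553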

                                                
                                    
x
+
TestForceSystemdFlag (74.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-981990 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-981990 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.311412513s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-981990 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-981990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-981990
--- PASS: TestForceSystemdFlag (74.49s)
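--force-systemd asks the container runtime to use the systemd cgroup manager; with crio the effect should be visible in the drop-in the test cats above. A sketch; the exact cgroup_manager key name is an assumption about crio's config format:
  $ out/minikube-linux-amd64 start -p force-systemd-flag-981990 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
  # expect a line such as: cgroup_manager = "systemd"   (assumed key name)
  $ out/minikube-linux-amd64 -p force-systemd-flag-981990 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup
  $ out/minikube-linux-amd64 delete -p force-systemd-flag-981990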

                                                
                                    
x
+
TestForceSystemdEnv (62.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-741583 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-741583 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.89728988s)
helpers_test.go:175: Cleaning up "force-systemd-env-741583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-741583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-741583: (1.029552258s)
--- PASS: TestForceSystemdEnv (62.93s)
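The env-driven variant does the same thing through MINIKUBE_FORCE_SYSTEMD, which also shows up in the environment dumps elsewhere in this report. A sketch; the truthy value "true" is an assumption:
  $ MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-741583 --memory=2048 --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 delete -p force-systemd-env-741583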

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (7.33s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.33s)

                                                
                                    
x
+
TestErrorSpam/setup (40.76s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-454557 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-454557 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-454557 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-454557 --driver=kvm2  --container-runtime=crio: (40.757099537s)
--- PASS: TestErrorSpam/setup (40.76s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (6.4s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 stop: (2.288051861s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 stop: (2.066811919s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-454557 --log_dir /tmp/nospam-454557 stop: (2.044698064s)
--- PASS: TestErrorSpam/stop (6.40s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-3966/.minikube/files/etc/test/nested/copy/11149/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (49.71s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-756697 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0816 12:33:56.823486   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:56.830568   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:56.841949   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:56.863371   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:56.904719   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:56.986301   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:57.147925   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:57.469767   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:58.111812   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:33:59.394135   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:34:01.955727   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:34:07.077350   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:34:17.319375   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-756697 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (49.714272673s)
--- PASS: TestFunctional/serial/StartWithProxy (49.71s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (31.42s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-756697 --alsologtostderr -v=8
E0816 12:34:37.801486   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-756697 --alsologtostderr -v=8: (31.417793708s)
functional_test.go:663: soft start took 31.418402992s for "functional-756697" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.42s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-756697 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:3.1: (1.047372779s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:3.3: (1.245624138s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:latest: (1.045521544s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)
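cache add pulls an image into minikube's host-side cache and loads it into the node's runtime, so it is visible to crictl afterwards. A sketch of the flow checked above and in the verify_cache_inside_node subtest below:
  $ out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:3.1
  $ out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:3.3
  $ out/minikube-linux-amd64 -p functional-756697 cache add registry.k8s.io/pause:latest
  # the images should now be listed inside the node
  $ out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl images | grep pause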

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-756697 /tmp/TestFunctionalserialCacheCmdcacheadd_local1864537601/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cache add minikube-local-cache-test:functional-756697
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 cache add minikube-local-cache-test:functional-756697: (1.933024506s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cache delete minikube-local-cache-test:functional-756697
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-756697
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.760614ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
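cache reload pushes the cached images back into the node, which is why the second inspecti call above succeeds after the image was removed. A sketch of the same sequence:
  # remove the image from the node, confirm it is gone, then restore it from the cache
  $ out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image absent
  $ out/minikube-linux-amd64 -p functional-756697 cache reload
  $ out/minikube-linux-amd64 -p functional-756697 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again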

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 kubectl -- --context functional-756697 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-756697 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.66s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-756697 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0816 12:35:18.763700   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-756697 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.661922458s)
functional_test.go:761: restart took 33.662010124s for "functional-756697" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.66s)
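--extra-config passes component flags through to the control plane on a (re)start; here the apiserver gains an admission plugin and the running cluster is restarted in place. A sketch; the static pod manifest path is the usual kubeadm location and is an assumption here:
  $ out/minikube-linux-amd64 start -p functional-756697 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  # the restarted kube-apiserver should carry the flag (assumed manifest path)
  $ out/minikube-linux-amd64 -p functional-756697 ssh "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"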

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-756697 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 logs: (1.426780531s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 logs --file /tmp/TestFunctionalserialLogsFileCmd284259105/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 logs --file /tmp/TestFunctionalserialLogsFileCmd284259105/001/logs.txt: (1.4162174s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.49s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-756697 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-756697
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-756697: exit status 115 (269.082426ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.73:31234 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-756697 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-756697 delete -f testdata/invalidsvc.yaml: (1.030622179s)
--- PASS: TestFunctional/serial/InvalidService (4.49s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 config get cpus: exit status 14 (55.13041ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 config get cpus: exit status 14 (43.54363ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
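config get exits with status 14 when a key is not set, which is what the unset/get/set/get/unset cycle above relies on. A sketch:
  $ out/minikube-linux-amd64 -p functional-756697 config unset cpus
  $ out/minikube-linux-amd64 -p functional-756697 config get cpus     # exit status 14: key not found
  $ out/minikube-linux-amd64 -p functional-756697 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-756697 config get cpus     # prints 2
  $ out/minikube-linux-amd64 -p functional-756697 config unset cpus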

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-756697 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-756697 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21012: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.13s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-756697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-756697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.207051ms)

                                                
                                                
-- stdout --
	* [functional-756697] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:35:53.721008   20861 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:35:53.721266   20861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:35:53.721275   20861 out.go:358] Setting ErrFile to fd 2...
	I0816 12:35:53.721280   20861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:35:53.721454   20861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:35:53.721941   20861 out.go:352] Setting JSON to false
	I0816 12:35:53.722815   20861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1099,"bootTime":1723810655,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:35:53.722877   20861 start.go:139] virtualization: kvm guest
	I0816 12:35:53.725133   20861 out.go:177] * [functional-756697] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 12:35:53.726562   20861 notify.go:220] Checking for updates...
	I0816 12:35:53.726581   20861 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:35:53.727968   20861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:35:53.729300   20861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:35:53.730762   20861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:35:53.732081   20861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:35:53.733389   20861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:35:53.734996   20861 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:35:53.735403   20861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:35:53.735448   20861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:35:53.756799   20861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0816 12:35:53.757250   20861 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:35:53.761530   20861 main.go:141] libmachine: Using API Version  1
	I0816 12:35:53.761557   20861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:35:53.764980   20861 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:35:53.765171   20861 main.go:141] libmachine: (functional-756697) Calling .DriverName
	I0816 12:35:53.765408   20861 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:35:53.765807   20861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:35:53.765844   20861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:35:53.786989   20861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0816 12:35:53.787463   20861 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:35:53.787934   20861 main.go:141] libmachine: Using API Version  1
	I0816 12:35:53.787949   20861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:35:53.788400   20861 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:35:53.788509   20861 main.go:141] libmachine: (functional-756697) Calling .DriverName
	I0816 12:35:53.835380   20861 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 12:35:53.836657   20861 start.go:297] selected driver: kvm2
	I0816 12:35:53.836682   20861 start.go:901] validating driver "kvm2" against &{Name:functional-756697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-756697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:35:53.836821   20861 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:35:53.839266   20861 out.go:201] 
	W0816 12:35:53.840777   20861 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 12:35:53.841969   20861 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-756697 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
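--dry-run validates the requested settings against the existing profile without touching the VM; an undersized memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while a plain dry run passes. A sketch of both calls made above:
  # fails validation: 250MB is below the usable minimum of 1800MB
  $ out/minikube-linux-amd64 start -p functional-756697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
  # passes validation without starting anything
  $ out/minikube-linux-amd64 start -p functional-756697 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio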

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-756697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-756697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.599146ms)

                                                
                                                
-- stdout --
	* [functional-756697] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 12:35:54.018450   20915 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:35:54.018706   20915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:35:54.018717   20915 out.go:358] Setting ErrFile to fd 2...
	I0816 12:35:54.018723   20915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:35:54.019081   20915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 12:35:54.019755   20915 out.go:352] Setting JSON to false
	I0816 12:35:54.021050   20915 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1099,"bootTime":1723810655,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 12:35:54.021150   20915 start.go:139] virtualization: kvm guest
	I0816 12:35:54.023421   20915 out.go:177] * [functional-756697] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0816 12:35:54.024787   20915 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:35:54.024791   20915 notify.go:220] Checking for updates...
	I0816 12:35:54.027200   20915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:35:54.028491   20915 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 12:35:54.029692   20915 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 12:35:54.030923   20915 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 12:35:54.032103   20915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:35:54.033725   20915 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:35:54.034409   20915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:35:54.034464   20915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:35:54.049607   20915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0816 12:35:54.049988   20915 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:35:54.050619   20915 main.go:141] libmachine: Using API Version  1
	I0816 12:35:54.050643   20915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:35:54.050989   20915 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:35:54.051193   20915 main.go:141] libmachine: (functional-756697) Calling .DriverName
	I0816 12:35:54.051459   20915 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:35:54.051874   20915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 12:35:54.051917   20915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 12:35:54.066144   20915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0816 12:35:54.066545   20915 main.go:141] libmachine: () Calling .GetVersion
	I0816 12:35:54.067022   20915 main.go:141] libmachine: Using API Version  1
	I0816 12:35:54.067057   20915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 12:35:54.067402   20915 main.go:141] libmachine: () Calling .GetMachineName
	I0816 12:35:54.067598   20915 main.go:141] libmachine: (functional-756697) Calling .DriverName
	I0816 12:35:54.100090   20915 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0816 12:35:54.101601   20915 start.go:297] selected driver: kvm2
	I0816 12:35:54.101625   20915 start.go:901] validating driver "kvm2" against &{Name:functional-756697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-756697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:35:54.101741   20915 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:35:54.103932   20915 out.go:201] 
	W0816 12:35:54.105341   20915 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 12:35:54.106687   20915 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
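status accepts a Go template via -f and structured output via -o json, as exercised above. A sketch; the template is quoted here only so an interactive shell does not try to brace-expand it (the kublet label is copied verbatim from the test's template, where it is just a label, not a field name):
  $ out/minikube-linux-amd64 -p functional-756697 status
  $ out/minikube-linux-amd64 -p functional-756697 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  $ out/minikube-linux-amd64 -p functional-756697 status -o json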

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-756697 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-756697 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cqc9j" [6d2d7032-13ee-4972-9f62-e56408a0838a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cqc9j" [6d2d7032-13ee-4972-9f62-e56408a0838a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003812331s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.73:31346
functional_test.go:1675: http://192.168.39.73:31346: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-cqc9j

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.73:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.73:31346
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.89s)
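The flow above is a standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then hit it. A sketch using the same image and names:
  $ kubectl --context functional-756697 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-756697 expose deployment hello-node-connect --type=NodePort --port=8080
  $ URL=$(out/minikube-linux-amd64 -p functional-756697 service hello-node-connect --url)
  $ curl -s "$URL"     # echoes the pod hostname and request headers, as captured above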

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (48.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [02d4b449-50ff-4b50-a8d4-0b806658b8d1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004034959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-756697 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-756697 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-756697 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-756697 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f9ebfb0d-0a27-48a9-8ff2-dd52f12cecb9] Pending
helpers_test.go:344: "sp-pod" [f9ebfb0d-0a27-48a9-8ff2-dd52f12cecb9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f9ebfb0d-0a27-48a9-8ff2-dd52f12cecb9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003558608s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-756697 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-756697 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-756697 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b274ab4c-be7e-40fd-bdac-730511f8c509] Pending
helpers_test.go:344: "sp-pod" [b274ab4c-be7e-40fd-bdac-730511f8c509] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b274ab4c-be7e-40fd-bdac-730511f8c509] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.004516676s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-756697 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.75s)
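The steps above are the persistence check: write a marker file onto the PVC-backed mount, delete and recreate the pod, then confirm the file is still there. A minimal sketch of the same flow driven through kubectl from Go; the context name, pod name (sp-pod), mount path (/tmp/mount) and manifest path are taken from the log, and kubectl is assumed to be on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl against the functional-756697 context and returns combined output.
func run(args ...string) string {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-756697"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Write a marker file onto the PVC-backed mount, then recycle the pod.
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=ready", "pod", "sp-pod", "--timeout=3m")

	// The marker must survive the restart because it lives on the claim, not in the container filesystem.
	fmt.Print(run("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}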

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh -n functional-756697 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cp functional-756697:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1887394530/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh -n functional-756697 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh -n functional-756697 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)

                                                
                                    
TestFunctional/parallel/MySQL (29.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-756697 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-wb6kb" [d778710d-d304-4303-8ea6-b4da8e3cda0d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-wb6kb" [d778710d-d304-4303-8ea6-b4da8e3cda0d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.003427971s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-756697 exec mysql-6cdb49bbb-wb6kb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-756697 exec mysql-6cdb49bbb-wb6kb -- mysql -ppassword -e "show databases;": exit status 1 (131.320034ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-756697 exec mysql-6cdb49bbb-wb6kb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-756697 exec mysql-6cdb49bbb-wb6kb -- mysql -ppassword -e "show databases;": exit status 1 (135.122783ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-756697 exec mysql-6cdb49bbb-wb6kb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.79s)
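The two non-zero exits above are expected: ERROR 2002 means mysqld inside the pod is not yet accepting connections on its socket, so the test keeps retrying the probe query until it succeeds. A minimal retry sketch under the same assumptions (kubectl on PATH, pod selected by the app=mysql label instead of the generated name):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-756697"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Resolve the pod created by the mysql deployment.
	pod, err := kubectl("get", "pod", "-l", "app=mysql", "-o", "jsonpath={.items[0].metadata.name}")
	if err != nil {
		log.Fatalf("find mysql pod: %v\n%s", err, pod)
	}
	pod = strings.TrimSpace(pod)

	// mysqld refuses connections (ERROR 2002) for a while after the container
	// starts, so retry the probe query instead of failing on the first attempt,
	// which is the behaviour the log above shows.
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := kubectl("exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;")
		if err == nil {
			fmt.Print(out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second)
	}
}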

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11149/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /etc/test/nested/copy/11149/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
TestFunctional/parallel/CertSync (1.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11149.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /etc/ssl/certs/11149.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11149.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /usr/share/ca-certificates/11149.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/111492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /etc/ssl/certs/111492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/111492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /usr/share/ca-certificates/111492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.39s)
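The checks above look for each synced certificate twice: once under its original name (for example /etc/ssl/certs/11149.pem) and once under the hashed name the system trust store uses to index it (51391683.0, 3ec20f2e.0). A minimal sketch of confirming that such a file really is a parseable PEM certificate, assuming it has first been copied out of the VM (for example by redirecting `minikube ssh "sudo cat /etc/ssl/certs/11149.pem"` to a local file):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path to a certificate fetched from the VM, e.g. the synced 11149.pem.
	data, err := os.ReadFile("11149.pem")
	if err != nil {
		log.Fatal(err)
	}

	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("file does not contain a PEM certificate")
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatalf("parse certificate: %v", err)
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}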

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-756697 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh "sudo systemctl is-active docker": exit status 1 (228.207575ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh "sudo systemctl is-active containerd": exit status 1 (243.443209ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
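The "ssh: Process exited with status 3" lines above are expected rather than failures: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active, and on a crio profile both docker and containerd should report "inactive". A minimal sketch of treating that combination as success, assuming the command is run through `minikube -p <profile> ssh`:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>` inside the minikube VM and reports the state.
// systemctl exits non-zero for inactive units, so the exit error alone is not a failure.
func isActive(profile, unit string) (string, error) {
	cmd := exec.Command("minikube", "-p", profile, "ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.Output() // stdout only; "inactive" arrives here even though the exit code is 3
	state := strings.TrimSpace(string(out))
	if state == "inactive" {
		// Expected for every runtime except the active one; ignore the non-zero exit.
		return state, nil
	}
	return state, err
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state, err := isActive("functional-756697", unit)
		if err != nil {
			log.Fatalf("%s: %v (state %q)", unit, err, state)
		}
		fmt.Printf("%s: %s\n", unit, state)
	}
}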

                                                
                                    
TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-756697 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-756697 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-n8rnw" [b224c4b8-4e40-449f-ad96-05b4fc495440] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-n8rnw" [b224c4b8-4e40-449f-ad96-05b4fc495440] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003683323s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)
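The "healthy within 11s" line comes from polling for pods with the app=hello-node label to become Ready. The same wait can be expressed as a single kubectl call; a minimal sketch, assuming kubectl is on PATH and reusing the label and timeout from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Equivalent of the 10m label-based wait in the log, expressed as one kubectl call.
	cmd := exec.Command("kubectl", "--context", "functional-756697",
		"wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("pods never became ready: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}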

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-756697 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-756697
localhost/kicbase/echo-server:functional-756697
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-756697 image ls --format short --alsologtostderr:
I0816 12:36:04.972297   21696 out.go:345] Setting OutFile to fd 1 ...
I0816 12:36:04.972437   21696 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:04.972448   21696 out.go:358] Setting ErrFile to fd 2...
I0816 12:36:04.972454   21696 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:04.972730   21696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
I0816 12:36:04.973473   21696 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:04.973585   21696 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:04.974037   21696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:04.974090   21696 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:04.990092   21696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
I0816 12:36:04.990595   21696 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:04.991234   21696 main.go:141] libmachine: Using API Version  1
I0816 12:36:04.991265   21696 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:04.991601   21696 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:04.991791   21696 main.go:141] libmachine: (functional-756697) Calling .GetState
I0816 12:36:04.993794   21696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:04.993840   21696 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:05.008883   21696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42423
I0816 12:36:05.009342   21696 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:05.009829   21696 main.go:141] libmachine: Using API Version  1
I0816 12:36:05.009859   21696 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:05.010190   21696 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:05.010364   21696 main.go:141] libmachine: (functional-756697) Calling .DriverName
I0816 12:36:05.010566   21696 ssh_runner.go:195] Run: systemctl --version
I0816 12:36:05.010609   21696 main.go:141] libmachine: (functional-756697) Calling .GetSSHHostname
I0816 12:36:05.013543   21696 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:05.013935   21696 main.go:141] libmachine: (functional-756697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:52:0b", ip: ""} in network mk-functional-756697: {Iface:virbr1 ExpiryTime:2024-08-16 13:33:45 +0000 UTC Type:0 Mac:52:54:00:28:52:0b Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:functional-756697 Clientid:01:52:54:00:28:52:0b}
I0816 12:36:05.013956   21696 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined IP address 192.168.39.73 and MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:05.014188   21696 main.go:141] libmachine: (functional-756697) Calling .GetSSHPort
I0816 12:36:05.014349   21696 main.go:141] libmachine: (functional-756697) Calling .GetSSHKeyPath
I0816 12:36:05.014498   21696 main.go:141] libmachine: (functional-756697) Calling .GetSSHUsername
I0816 12:36:05.014648   21696 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/functional-756697/id_rsa Username:docker}
I0816 12:36:05.132135   21696 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 12:36:05.749321   21696 main.go:141] libmachine: Making call to close driver server
I0816 12:36:05.749344   21696 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:05.749625   21696 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:05.749646   21696 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:05.749660   21696 main.go:141] libmachine: Making call to close driver server
I0816 12:36:05.749662   21696 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
I0816 12:36:05.749667   21696 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:05.749906   21696 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:05.749929   21696 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:05.749943   21696 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-756697 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-756697  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-756697  | 31a9510491a99 | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-756697 image ls --format table --alsologtostderr:
I0816 12:36:08.330921   21839 out.go:345] Setting OutFile to fd 1 ...
I0816 12:36:08.331170   21839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:08.331179   21839 out.go:358] Setting ErrFile to fd 2...
I0816 12:36:08.331183   21839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:08.331350   21839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
I0816 12:36:08.331873   21839 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:08.331965   21839 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:08.332347   21839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:08.332395   21839 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:08.347252   21839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
I0816 12:36:08.347718   21839 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:08.348325   21839 main.go:141] libmachine: Using API Version  1
I0816 12:36:08.348348   21839 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:08.348703   21839 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:08.348891   21839 main.go:141] libmachine: (functional-756697) Calling .GetState
I0816 12:36:08.350684   21839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:08.350721   21839 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:08.365529   21839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
I0816 12:36:08.365939   21839 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:08.366469   21839 main.go:141] libmachine: Using API Version  1
I0816 12:36:08.366495   21839 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:08.366786   21839 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:08.366974   21839 main.go:141] libmachine: (functional-756697) Calling .DriverName
I0816 12:36:08.367171   21839 ssh_runner.go:195] Run: systemctl --version
I0816 12:36:08.367195   21839 main.go:141] libmachine: (functional-756697) Calling .GetSSHHostname
I0816 12:36:08.369714   21839 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:08.370176   21839 main.go:141] libmachine: (functional-756697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:52:0b", ip: ""} in network mk-functional-756697: {Iface:virbr1 ExpiryTime:2024-08-16 13:33:45 +0000 UTC Type:0 Mac:52:54:00:28:52:0b Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:functional-756697 Clientid:01:52:54:00:28:52:0b}
I0816 12:36:08.370215   21839 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined IP address 192.168.39.73 and MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:08.370447   21839 main.go:141] libmachine: (functional-756697) Calling .GetSSHPort
I0816 12:36:08.370625   21839 main.go:141] libmachine: (functional-756697) Calling .GetSSHKeyPath
I0816 12:36:08.370763   21839 main.go:141] libmachine: (functional-756697) Calling .GetSSHUsername
I0816 12:36:08.370918   21839 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/functional-756697/id_rsa Username:docker}
I0816 12:36:08.482764   21839 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 12:36:08.581702   21839 main.go:141] libmachine: Making call to close driver server
I0816 12:36:08.581721   21839 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:08.581996   21839 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:08.582024   21839 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:08.582024   21839 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
I0816 12:36:08.582032   21839 main.go:141] libmachine: Making call to close driver server
I0816 12:36:08.582041   21839 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:08.582237   21839 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:08.582282   21839 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:08.582306   21839 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-756697 image ls --format json --alsologtostderr:
[{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"31a9510491a9994db4d814b24afe807ed8104d51116de8bfbeee2c7062e86104","repoDigests":["localhost/minikube-local-cache-test@sha256:106dd4f6abd0d3ebaae99560c3d50df6c6330286ff4ee37651d2d3c0a2ee1832"],"repoTags":["localhost/minikube-local-cache-test:functional-756697"],"size":"3330"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"
68420936"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-756697"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@s
ha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha25
6:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ad83b2ca7b09e6162f96f933ee
cded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"
],"size":"149009664"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-756697 image ls --format json --alsologtostderr:
I0816 12:36:08.052017   21805 out.go:345] Setting OutFile to fd 1 ...
I0816 12:36:08.052222   21805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:08.052296   21805 out.go:358] Setting ErrFile to fd 2...
I0816 12:36:08.052321   21805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:08.053106   21805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
I0816 12:36:08.053883   21805 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:08.054026   21805 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:08.054543   21805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:08.054597   21805 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:08.070549   21805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
I0816 12:36:08.071019   21805 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:08.071617   21805 main.go:141] libmachine: Using API Version  1
I0816 12:36:08.071645   21805 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:08.071967   21805 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:08.072178   21805 main.go:141] libmachine: (functional-756697) Calling .GetState
I0816 12:36:08.074053   21805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:08.074088   21805 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:08.089654   21805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
I0816 12:36:08.089989   21805 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:08.090465   21805 main.go:141] libmachine: Using API Version  1
I0816 12:36:08.090511   21805 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:08.090868   21805 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:08.091077   21805 main.go:141] libmachine: (functional-756697) Calling .DriverName
I0816 12:36:08.091300   21805 ssh_runner.go:195] Run: systemctl --version
I0816 12:36:08.091328   21805 main.go:141] libmachine: (functional-756697) Calling .GetSSHHostname
I0816 12:36:08.094073   21805 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:08.094469   21805 main.go:141] libmachine: (functional-756697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:52:0b", ip: ""} in network mk-functional-756697: {Iface:virbr1 ExpiryTime:2024-08-16 13:33:45 +0000 UTC Type:0 Mac:52:54:00:28:52:0b Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:functional-756697 Clientid:01:52:54:00:28:52:0b}
I0816 12:36:08.094500   21805 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined IP address 192.168.39.73 and MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:08.094797   21805 main.go:141] libmachine: (functional-756697) Calling .GetSSHPort
I0816 12:36:08.094978   21805 main.go:141] libmachine: (functional-756697) Calling .GetSSHKeyPath
I0816 12:36:08.095138   21805 main.go:141] libmachine: (functional-756697) Calling .GetSSHUsername
I0816 12:36:08.095278   21805 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/functional-756697/id_rsa Username:docker}
I0816 12:36:08.186638   21805 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 12:36:08.285633   21805 main.go:141] libmachine: Making call to close driver server
I0816 12:36:08.285649   21805 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:08.285896   21805 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:08.285921   21805 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:08.285938   21805 main.go:141] libmachine: Making call to close driver server
I0816 12:36:08.285898   21805 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
I0816 12:36:08.285948   21805 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:08.286203   21805 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:08.286219   21805 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
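The JSON stdout above is an array of image records with id, repoDigests, repoTags and size fields, which makes it straightforward to consume programmatically. A minimal sketch of decoding that output, assuming a minikube binary on PATH (the log invokes the locally built out/minikube-linux-amd64 instead):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageInfo mirrors the fields visible in the `image ls --format json` output above.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-756697",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode image list: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-12.12s %10s bytes  %v\n", img.ID, img.Size, img.RepoTags)
	}
}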

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-756697 image ls --format yaml --alsologtostderr:
- id: 31a9510491a9994db4d814b24afe807ed8104d51116de8bfbeee2c7062e86104
repoDigests:
- localhost/minikube-local-cache-test@sha256:106dd4f6abd0d3ebaae99560c3d50df6c6330286ff4ee37651d2d3c0a2ee1832
repoTags:
- localhost/minikube-local-cache-test:functional-756697
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-756697
size: "4943877"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-756697 image ls --format yaml --alsologtostderr:
I0816 12:36:05.795659   21720 out.go:345] Setting OutFile to fd 1 ...
I0816 12:36:05.795775   21720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:05.795785   21720 out.go:358] Setting ErrFile to fd 2...
I0816 12:36:05.795791   21720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:05.795959   21720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
I0816 12:36:05.796512   21720 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:05.796629   21720 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:05.797015   21720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:05.797074   21720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:05.813021   21720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
I0816 12:36:05.813442   21720 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:05.814020   21720 main.go:141] libmachine: Using API Version  1
I0816 12:36:05.814049   21720 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:05.814408   21720 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:05.814585   21720 main.go:141] libmachine: (functional-756697) Calling .GetState
I0816 12:36:05.816285   21720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:05.816319   21720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:05.833719   21720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
I0816 12:36:05.834202   21720 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:05.834695   21720 main.go:141] libmachine: Using API Version  1
I0816 12:36:05.834729   21720 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:05.835032   21720 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:05.835188   21720 main.go:141] libmachine: (functional-756697) Calling .DriverName
I0816 12:36:05.835386   21720 ssh_runner.go:195] Run: systemctl --version
I0816 12:36:05.835415   21720 main.go:141] libmachine: (functional-756697) Calling .GetSSHHostname
I0816 12:36:05.838207   21720 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:05.838606   21720 main.go:141] libmachine: (functional-756697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:52:0b", ip: ""} in network mk-functional-756697: {Iface:virbr1 ExpiryTime:2024-08-16 13:33:45 +0000 UTC Type:0 Mac:52:54:00:28:52:0b Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:functional-756697 Clientid:01:52:54:00:28:52:0b}
I0816 12:36:05.838630   21720 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined IP address 192.168.39.73 and MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:05.838737   21720 main.go:141] libmachine: (functional-756697) Calling .GetSSHPort
I0816 12:36:05.838928   21720 main.go:141] libmachine: (functional-756697) Calling .GetSSHKeyPath
I0816 12:36:05.839075   21720 main.go:141] libmachine: (functional-756697) Calling .GetSSHUsername
I0816 12:36:05.839241   21720 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/functional-756697/id_rsa Username:docker}
I0816 12:36:05.943996   21720 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 12:36:06.004634   21720 main.go:141] libmachine: Making call to close driver server
I0816 12:36:06.004650   21720 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:06.004932   21720 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:06.004947   21720 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:06.004958   21720 main.go:141] libmachine: Making call to close driver server
I0816 12:36:06.004965   21720 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:06.005167   21720 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:06.005187   21720 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:06.005194   21720 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh pgrep buildkitd: exit status 1 (182.519306ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image build -t localhost/my-image:functional-756697 testdata/build --alsologtostderr
2024/08/16 12:36:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 image build -t localhost/my-image:functional-756697 testdata/build --alsologtostderr: (3.785616553s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-756697 image build -t localhost/my-image:functional-756697 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 26d0dcec3b0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-756697
--> b85e7c90d7e
Successfully tagged localhost/my-image:functional-756697
b85e7c90d7e70d0206b813ff9c51167ad0065fbe1c64083b3155b0cfe944e165
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-756697 image build -t localhost/my-image:functional-756697 testdata/build --alsologtostderr:
I0816 12:36:06.235678   21774 out.go:345] Setting OutFile to fd 1 ...
I0816 12:36:06.235823   21774 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:06.235832   21774 out.go:358] Setting ErrFile to fd 2...
I0816 12:36:06.235836   21774 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:36:06.236003   21774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
I0816 12:36:06.236564   21774 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:06.237097   21774 config.go:182] Loaded profile config "functional-756697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:36:06.237463   21774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:06.237497   21774 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:06.252534   21774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34119
I0816 12:36:06.252993   21774 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:06.253527   21774 main.go:141] libmachine: Using API Version  1
I0816 12:36:06.253547   21774 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:06.253897   21774 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:06.254087   21774 main.go:141] libmachine: (functional-756697) Calling .GetState
I0816 12:36:06.255906   21774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 12:36:06.255948   21774 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 12:36:06.271367   21774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
I0816 12:36:06.271786   21774 main.go:141] libmachine: () Calling .GetVersion
I0816 12:36:06.272330   21774 main.go:141] libmachine: Using API Version  1
I0816 12:36:06.272363   21774 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 12:36:06.272699   21774 main.go:141] libmachine: () Calling .GetMachineName
I0816 12:36:06.272851   21774 main.go:141] libmachine: (functional-756697) Calling .DriverName
I0816 12:36:06.273053   21774 ssh_runner.go:195] Run: systemctl --version
I0816 12:36:06.273079   21774 main.go:141] libmachine: (functional-756697) Calling .GetSSHHostname
I0816 12:36:06.276063   21774 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:06.276455   21774 main.go:141] libmachine: (functional-756697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:52:0b", ip: ""} in network mk-functional-756697: {Iface:virbr1 ExpiryTime:2024-08-16 13:33:45 +0000 UTC Type:0 Mac:52:54:00:28:52:0b Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:functional-756697 Clientid:01:52:54:00:28:52:0b}
I0816 12:36:06.276483   21774 main.go:141] libmachine: (functional-756697) DBG | domain functional-756697 has defined IP address 192.168.39.73 and MAC address 52:54:00:28:52:0b in network mk-functional-756697
I0816 12:36:06.276597   21774 main.go:141] libmachine: (functional-756697) Calling .GetSSHPort
I0816 12:36:06.276803   21774 main.go:141] libmachine: (functional-756697) Calling .GetSSHKeyPath
I0816 12:36:06.277051   21774 main.go:141] libmachine: (functional-756697) Calling .GetSSHUsername
I0816 12:36:06.277210   21774 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/functional-756697/id_rsa Username:docker}
I0816 12:36:06.375896   21774 build_images.go:161] Building image from path: /tmp/build.4113026463.tar
I0816 12:36:06.375971   21774 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 12:36:06.397256   21774 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4113026463.tar
I0816 12:36:06.403671   21774 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4113026463.tar: stat -c "%s %y" /var/lib/minikube/build/build.4113026463.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4113026463.tar': No such file or directory
I0816 12:36:06.403705   21774 ssh_runner.go:362] scp /tmp/build.4113026463.tar --> /var/lib/minikube/build/build.4113026463.tar (3072 bytes)
I0816 12:36:06.447697   21774 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4113026463
I0816 12:36:06.460986   21774 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4113026463 -xf /var/lib/minikube/build/build.4113026463.tar
I0816 12:36:06.472788   21774 crio.go:315] Building image: /var/lib/minikube/build/build.4113026463
I0816 12:36:06.472851   21774 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-756697 /var/lib/minikube/build/build.4113026463 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0816 12:36:09.953119   21774 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-756697 /var/lib/minikube/build/build.4113026463 --cgroup-manager=cgroupfs: (3.480240925s)
I0816 12:36:09.953205   21774 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4113026463
I0816 12:36:09.964962   21774 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4113026463.tar
I0816 12:36:09.974737   21774 build_images.go:217] Built localhost/my-image:functional-756697 from /tmp/build.4113026463.tar
I0816 12:36:09.974768   21774 build_images.go:133] succeeded building to: functional-756697
I0816 12:36:09.974775   21774 build_images.go:134] failed building to: 
I0816 12:36:09.974814   21774 main.go:141] libmachine: Making call to close driver server
I0816 12:36:09.974827   21774 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:09.975133   21774 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:09.975173   21774 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 12:36:09.975188   21774 main.go:141] libmachine: Making call to close driver server
I0816 12:36:09.975199   21774 main.go:141] libmachine: (functional-756697) Calling .Close
I0816 12:36:09.975198   21774 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
I0816 12:36:09.975468   21774 main.go:141] libmachine: (functional-756697) DBG | Closing plugin on server side
I0816 12:36:09.975505   21774 main.go:141] libmachine: Successfully made call to close driver server
I0816 12:36:09.975533   21774 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.19s)
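For context, a minimal Dockerfile sketch that would produce the three STEP lines logged above; the actual testdata/build directory in the minikube repository is authoritative and may differ (content.txt here is simply the small payload the test adds):

# STEP 1/3 in the build log above: busybox base image from GCR
FROM gcr.io/k8s-minikube/busybox
# STEP 2/3: a no-op layer, exercising RUN
RUN true
# STEP 3/3: copy the test payload into the image root
ADD content.txt /

With the crio runtime the build is delegated to podman (sudo podman build ... --cgroup-manager=cgroupfs, as the ssh_runner lines show), and the result is committed as localhost/my-image:functional-756697 and then checked with image ls.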

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.9553041s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-756697
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image load --daemon kicbase/echo-server:functional-756697 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 image load --daemon kicbase/echo-server:functional-756697 --alsologtostderr: (1.388926655s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "232.767395ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.91544ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image load --daemon kicbase/echo-server:functional-756697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "231.586494ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.025254ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (14.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdany-port925479812/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723811746228902588" to /tmp/TestFunctionalparallelMountCmdany-port925479812/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723811746228902588" to /tmp/TestFunctionalparallelMountCmdany-port925479812/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723811746228902588" to /tmp/TestFunctionalparallelMountCmdany-port925479812/001/test-1723811746228902588
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.97212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 12:35 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 12:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 12:35 test-1723811746228902588
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh cat /mount-9p/test-1723811746228902588
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-756697 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [da07f983-f446-45ac-9fcf-eaed0b123574] Pending
helpers_test.go:344: "busybox-mount" [da07f983-f446-45ac-9fcf-eaed0b123574] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [da07f983-f446-45ac-9fcf-eaed0b123574] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [da07f983-f446-45ac-9fcf-eaed0b123574] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.010562078s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-756697 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdany-port925479812/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.35s)
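Condensed, the 9p mount flow exercised above is roughly the following (commands taken from the log; the host directory name and the shell backgrounding are illustrative):

out/minikube-linux-amd64 mount -p functional-756697 /tmp/host-dir:/mount-9p &              # background 9p server on the host
out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p"         # confirm the guest sees the mount
out/minikube-linux-amd64 -p functional-756697 ssh -- ls -la /mount-9p                      # host files visible inside the VM
kubectl --context functional-756697 replace --force -f testdata/busybox-mount-test.yaml    # pod reads/writes the mount
out/minikube-linux-amd64 -p functional-756697 ssh "sudo umount -f /mount-9p"               # teardown

The first findmnt attempt can exit with status 1, as it does above, because it can race the mount becoming available; the test retries and then proceeds.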

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-756697
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image load --daemon kicbase/echo-server:functional-756697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image save kicbase/echo-server:functional-756697 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image rm kicbase/echo-server:functional-756697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-756697
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 image save --daemon kicbase/echo-server:functional-756697 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-756697 image save --daemon kicbase/echo-server:functional-756697 --alsologtostderr: (3.610077326s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-756697
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.65s)
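Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon tests above form one save-remove-load round trip; a condensed sketch using the same subcommands (tar path shortened for readability):

out/minikube-linux-amd64 -p functional-756697 image save kicbase/echo-server:functional-756697 ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-756697 image rm kicbase/echo-server:functional-756697
out/minikube-linux-amd64 -p functional-756697 image load ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-756697 image save --daemon kicbase/echo-server:functional-756697
docker image inspect localhost/kicbase/echo-server:functional-756697

Note that image save --daemon places the image in the local Docker daemon under the localhost/ prefix, which is why the final inspect (functional_test.go:432 above) queries localhost/kicbase/echo-server rather than the bare name.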

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 service list -o json
functional_test.go:1494: Took "304.03146ms" to run "out/minikube-linux-amd64 -p functional-756697 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.73:31923
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.73:31923
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdspecific-port157436270/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.588127ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdspecific-port157436270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh "sudo umount -f /mount-9p": exit status 1 (269.583735ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-756697 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdspecific-port157436270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2535356287/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2535356287/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2535356287/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T" /mount1: exit status 1 (291.704208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-756697 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-756697 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2535356287/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2535356287/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-756697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2535356287/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-756697
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-756697
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-756697
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (245.5s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-863936 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 12:36:40.685090   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:56.826803   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:39:24.527041   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-863936 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m4.860791458s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (245.50s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.11s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- rollout status deployment/busybox
E0816 12:40:40.921107   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:40.927500   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:40.938910   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:40.960480   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:41.001894   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:41.083397   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:41.244922   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:41.566722   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:42.208737   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:43.490236   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-863936 -- rollout status deployment/busybox: (5.963887385s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-gm458 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-t5tjw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-zqpfx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-gm458 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-t5tjw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-zqpfx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-gm458 -- nslookup kubernetes.default.svc.cluster.local
E0816 12:40:46.051566   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-t5tjw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-zqpfx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.11s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-gm458 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-gm458 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-t5tjw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-t5tjw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-zqpfx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863936 -- exec busybox-7dff88458-zqpfx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-863936 -v=7 --alsologtostderr
E0816 12:40:51.173671   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:41:01.415166   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:41:21.897153   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-863936 -v=7 --alsologtostderr: (54.100474264s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.91s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-863936 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp testdata/cp-test.txt ha-863936:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936:/home/docker/cp-test.txt ha-863936-m02:/home/docker/cp-test_ha-863936_ha-863936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test_ha-863936_ha-863936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936:/home/docker/cp-test.txt ha-863936-m03:/home/docker/cp-test_ha-863936_ha-863936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test_ha-863936_ha-863936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936:/home/docker/cp-test.txt ha-863936-m04:/home/docker/cp-test_ha-863936_ha-863936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test_ha-863936_ha-863936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp testdata/cp-test.txt ha-863936-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m02:/home/docker/cp-test.txt ha-863936:/home/docker/cp-test_ha-863936-m02_ha-863936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test_ha-863936-m02_ha-863936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m02:/home/docker/cp-test.txt ha-863936-m03:/home/docker/cp-test_ha-863936-m02_ha-863936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test_ha-863936-m02_ha-863936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m02:/home/docker/cp-test.txt ha-863936-m04:/home/docker/cp-test_ha-863936-m02_ha-863936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test_ha-863936-m02_ha-863936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp testdata/cp-test.txt ha-863936-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt ha-863936:/home/docker/cp-test_ha-863936-m03_ha-863936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test_ha-863936-m03_ha-863936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt ha-863936-m02:/home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test_ha-863936-m03_ha-863936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m03:/home/docker/cp-test.txt ha-863936-m04:/home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test_ha-863936-m03_ha-863936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp testdata/cp-test.txt ha-863936-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2848660471/001/cp-test_ha-863936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt ha-863936:/home/docker/cp-test_ha-863936-m04_ha-863936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936 "sudo cat /home/docker/cp-test_ha-863936-m04_ha-863936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt ha-863936-m02:/home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m02 "sudo cat /home/docker/cp-test_ha-863936-m04_ha-863936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 cp ha-863936-m04:/home/docker/cp-test.txt ha-863936-m03:/home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 ssh -n ha-863936-m03 "sudo cat /home/docker/cp-test_ha-863936-m04_ha-863936-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.448545025s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.53s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-863936 node delete m03 -v=7 --alsologtostderr: (15.817083819s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (319.13s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-863936 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 12:53:56.825298   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:55:40.921108   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:57:03.984307   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:58:56.823177   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-863936 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m18.384261972s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (319.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-863936 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-863936 --control-plane -v=7 --alsologtostderr: (1m17.06547418s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-863936 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (84.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-859246 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0816 13:00:40.921117   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-859246 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.4036246s)
--- PASS: TestJSONOutput/start/Command (84.40s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-859246 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-859246 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-859246 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-859246 --output=json --user=testUser: (7.34401587s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-554568 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-554568 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.492276ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"23613338-77f4-458e-a58d-53a4aa0fa099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-554568] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9032db9-f556-4732-a9c8-546c7b4cc284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"8b78840f-e389-4441-ab74-b6b3abd854d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d466420d-1919-4e9d-95c9-eef04f981c7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig"}}
	{"specversion":"1.0","id":"821f5f55-ea8f-4d95-8e2e-4d12dd2c921d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube"}}
	{"specversion":"1.0","id":"c893cbfe-675f-47c3-b182-3809afdb2809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"98c91bbf-d595-47b3-92fe-b31e1b801c57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"954cc326-41f8-485b-ae7f-275cc183067a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-554568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-554568
--- PASS: TestErrorJSONOutput (0.18s)
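Note on the output captured above: with --output=json, minikube emits one CloudEvents-style JSON object per line (specversion, id, source, type, datacontenttype, and a string-valued data map), which is exactly what the stdout block shows. The following is a minimal, illustrative Go sketch of reading such a stream; the struct fields are taken from the events shown above and are an assumption for illustration, not a definitive statement of minikube's schema.

// Minimal sketch: decode lines of `minikube ... --output=json` output.
// Field names mirror the events captured in the log above; this is an
// illustrative reader, not minikube's own schema definition.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// e.g. io.k8s.sigs.minikube.error events above carry "exitcode" in data
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}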

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (85.07s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-932845 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-932845 --driver=kvm2  --container-runtime=crio: (42.345752229s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-935196 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-935196 --driver=kvm2  --container-runtime=crio: (40.129937907s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-932845
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-935196
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-935196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-935196
helpers_test.go:175: Cleaning up "first-932845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-932845
--- PASS: TestMinikubeProfile (85.07s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-665009 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0816 13:03:56.825120   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-665009 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.590926344s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.59s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-665009 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-665009 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-677072 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-677072 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.944246542s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.95s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677072 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677072 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-665009 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677072 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677072 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (2.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-677072
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-677072: (2.272826678s)
--- PASS: TestMountStart/serial/Stop (2.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-677072
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-677072: (21.788707384s)
--- PASS: TestMountStart/serial/RestartStopped (22.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677072 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677072 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-336982 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 13:05:40.921167   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-336982 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.645678282s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.03s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-336982 -- rollout status deployment/busybox: (5.068927431s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-m9dxd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-vcx2m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-m9dxd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-vcx2m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-m9dxd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-vcx2m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.49s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-m9dxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-m9dxd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-vcx2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-336982 -- exec busybox-7dff88458-vcx2m -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
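The host-IP extraction above relies on the busybox shell pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, i.e. take line 5 of the lookup output and keep its third space-separated field, then ping the result (192.168.39.1 here). Below is a small, purely illustrative Go equivalent of that extraction step; the sample input in main is a placeholder, not real nslookup output.

// Illustrative re-implementation of the awk/cut step used above:
// line 5 (NR==5), third single-space-separated field (cut -d' ' -f3).
package main

import (
	"fmt"
	"strings"
)

func hostIPFromNslookup(out string) (string, bool) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ") // cut counts empty fields too
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	// Placeholder five-line input; real busybox output differs, the point
	// is only the line/field selection.
	sample := "line1\nline2\nline3\nline4\nfield1 field2 192.168.39.1"
	ip, ok := hostIPFromNslookup(sample)
	fmt.Println(ip, ok)
}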

                                                
                                    
TestMultiNode/serial/AddNode (50.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-336982 -v 3 --alsologtostderr
E0816 13:06:59.890944   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-336982 -v 3 --alsologtostderr: (50.015243053s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.57s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-336982 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp testdata/cp-test.txt multinode-336982:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile804343114/001/cp-test_multinode-336982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982:/home/docker/cp-test.txt multinode-336982-m02:/home/docker/cp-test_multinode-336982_multinode-336982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m02 "sudo cat /home/docker/cp-test_multinode-336982_multinode-336982-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982:/home/docker/cp-test.txt multinode-336982-m03:/home/docker/cp-test_multinode-336982_multinode-336982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m03 "sudo cat /home/docker/cp-test_multinode-336982_multinode-336982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp testdata/cp-test.txt multinode-336982-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile804343114/001/cp-test_multinode-336982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt multinode-336982:/home/docker/cp-test_multinode-336982-m02_multinode-336982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982 "sudo cat /home/docker/cp-test_multinode-336982-m02_multinode-336982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982-m02:/home/docker/cp-test.txt multinode-336982-m03:/home/docker/cp-test_multinode-336982-m02_multinode-336982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m03 "sudo cat /home/docker/cp-test_multinode-336982-m02_multinode-336982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp testdata/cp-test.txt multinode-336982-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile804343114/001/cp-test_multinode-336982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt multinode-336982:/home/docker/cp-test_multinode-336982-m03_multinode-336982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982 "sudo cat /home/docker/cp-test_multinode-336982-m03_multinode-336982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 cp multinode-336982-m03:/home/docker/cp-test.txt multinode-336982-m02:/home/docker/cp-test_multinode-336982-m03_multinode-336982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 ssh -n multinode-336982-m02 "sudo cat /home/docker/cp-test_multinode-336982-m03_multinode-336982-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.88s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-336982 node stop m03: (1.42460495s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-336982 status: exit status 7 (402.192779ms)

                                                
                                                
-- stdout --
	multinode-336982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-336982-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-336982-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-336982 status --alsologtostderr: exit status 7 (412.263395ms)

                                                
                                                
-- stdout --
	multinode-336982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-336982-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-336982-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:07:54.995456   39075 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:07:54.995730   39075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:07:54.995740   39075 out.go:358] Setting ErrFile to fd 2...
	I0816 13:07:54.995745   39075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:07:54.995934   39075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:07:54.996080   39075 out.go:352] Setting JSON to false
	I0816 13:07:54.996104   39075 mustload.go:65] Loading cluster: multinode-336982
	I0816 13:07:54.996155   39075 notify.go:220] Checking for updates...
	I0816 13:07:54.996596   39075 config.go:182] Loaded profile config "multinode-336982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:07:54.996616   39075 status.go:255] checking status of multinode-336982 ...
	I0816 13:07:54.997103   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:54.997139   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.017526   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0816 13:07:55.017950   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.018528   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.018556   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.018856   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.019029   39075 main.go:141] libmachine: (multinode-336982) Calling .GetState
	I0816 13:07:55.020625   39075 status.go:330] multinode-336982 host status = "Running" (err=<nil>)
	I0816 13:07:55.020639   39075 host.go:66] Checking if "multinode-336982" exists ...
	I0816 13:07:55.020938   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:55.020970   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.035473   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0816 13:07:55.035807   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.036218   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.036237   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.036503   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.036684   39075 main.go:141] libmachine: (multinode-336982) Calling .GetIP
	I0816 13:07:55.039005   39075 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:07:55.039413   39075 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:07:55.039446   39075 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:07:55.039532   39075 host.go:66] Checking if "multinode-336982" exists ...
	I0816 13:07:55.039811   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:55.039847   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.054532   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0816 13:07:55.054931   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.055395   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.055423   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.055709   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.055895   39075 main.go:141] libmachine: (multinode-336982) Calling .DriverName
	I0816 13:07:55.056086   39075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 13:07:55.056105   39075 main.go:141] libmachine: (multinode-336982) Calling .GetSSHHostname
	I0816 13:07:55.058762   39075 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:07:55.059095   39075 main.go:141] libmachine: (multinode-336982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1f:11", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:05:09 +0000 UTC Type:0 Mac:52:54:00:26:1f:11 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-336982 Clientid:01:52:54:00:26:1f:11}
	I0816 13:07:55.059121   39075 main.go:141] libmachine: (multinode-336982) DBG | domain multinode-336982 has defined IP address 192.168.39.208 and MAC address 52:54:00:26:1f:11 in network mk-multinode-336982
	I0816 13:07:55.059216   39075 main.go:141] libmachine: (multinode-336982) Calling .GetSSHPort
	I0816 13:07:55.059387   39075 main.go:141] libmachine: (multinode-336982) Calling .GetSSHKeyPath
	I0816 13:07:55.059547   39075 main.go:141] libmachine: (multinode-336982) Calling .GetSSHUsername
	I0816 13:07:55.059689   39075 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982/id_rsa Username:docker}
	I0816 13:07:55.136790   39075 ssh_runner.go:195] Run: systemctl --version
	I0816 13:07:55.142902   39075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:07:55.158015   39075 kubeconfig.go:125] found "multinode-336982" server: "https://192.168.39.208:8443"
	I0816 13:07:55.158049   39075 api_server.go:166] Checking apiserver status ...
	I0816 13:07:55.158099   39075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 13:07:55.171575   39075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0816 13:07:55.187507   39075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 13:07:55.187549   39075 ssh_runner.go:195] Run: ls
	I0816 13:07:55.191947   39075 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0816 13:07:55.195729   39075 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0816 13:07:55.195747   39075 status.go:422] multinode-336982 apiserver status = Running (err=<nil>)
	I0816 13:07:55.195757   39075 status.go:257] multinode-336982 status: &{Name:multinode-336982 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 13:07:55.195776   39075 status.go:255] checking status of multinode-336982-m02 ...
	I0816 13:07:55.196095   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:55.196132   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.210852   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0816 13:07:55.211271   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.211757   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.211778   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.212071   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.212248   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .GetState
	I0816 13:07:55.213892   39075 status.go:330] multinode-336982-m02 host status = "Running" (err=<nil>)
	I0816 13:07:55.213909   39075 host.go:66] Checking if "multinode-336982-m02" exists ...
	I0816 13:07:55.214186   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:55.214217   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.229370   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0816 13:07:55.229768   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.230216   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.230237   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.230520   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.230702   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .GetIP
	I0816 13:07:55.233389   39075 main.go:141] libmachine: (multinode-336982-m02) DBG | domain multinode-336982-m02 has defined MAC address 52:54:00:1b:c9:f0 in network mk-multinode-336982
	I0816 13:07:55.233744   39075 main.go:141] libmachine: (multinode-336982-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:c9:f0", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:06:13 +0000 UTC Type:0 Mac:52:54:00:1b:c9:f0 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-336982-m02 Clientid:01:52:54:00:1b:c9:f0}
	I0816 13:07:55.233770   39075 main.go:141] libmachine: (multinode-336982-m02) DBG | domain multinode-336982-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:1b:c9:f0 in network mk-multinode-336982
	I0816 13:07:55.233904   39075 host.go:66] Checking if "multinode-336982-m02" exists ...
	I0816 13:07:55.234176   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:55.234231   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.249364   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0816 13:07:55.249728   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.250106   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.250123   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.250406   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.250571   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .DriverName
	I0816 13:07:55.250740   39075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 13:07:55.250757   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .GetSSHHostname
	I0816 13:07:55.253184   39075 main.go:141] libmachine: (multinode-336982-m02) DBG | domain multinode-336982-m02 has defined MAC address 52:54:00:1b:c9:f0 in network mk-multinode-336982
	I0816 13:07:55.253587   39075 main.go:141] libmachine: (multinode-336982-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:c9:f0", ip: ""} in network mk-multinode-336982: {Iface:virbr1 ExpiryTime:2024-08-16 14:06:13 +0000 UTC Type:0 Mac:52:54:00:1b:c9:f0 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-336982-m02 Clientid:01:52:54:00:1b:c9:f0}
	I0816 13:07:55.253608   39075 main.go:141] libmachine: (multinode-336982-m02) DBG | domain multinode-336982-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:1b:c9:f0 in network mk-multinode-336982
	I0816 13:07:55.253742   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .GetSSHPort
	I0816 13:07:55.253895   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .GetSSHKeyPath
	I0816 13:07:55.254033   39075 main.go:141] libmachine: (multinode-336982-m02) Calling .GetSSHUsername
	I0816 13:07:55.254154   39075 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-3966/.minikube/machines/multinode-336982-m02/id_rsa Username:docker}
	I0816 13:07:55.336238   39075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 13:07:55.350152   39075 status.go:257] multinode-336982-m02 status: &{Name:multinode-336982-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 13:07:55.350182   39075 status.go:255] checking status of multinode-336982-m03 ...
	I0816 13:07:55.350493   39075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 13:07:55.350526   39075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 13:07:55.366013   39075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0816 13:07:55.366432   39075 main.go:141] libmachine: () Calling .GetVersion
	I0816 13:07:55.366880   39075 main.go:141] libmachine: Using API Version  1
	I0816 13:07:55.366897   39075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 13:07:55.367226   39075 main.go:141] libmachine: () Calling .GetMachineName
	I0816 13:07:55.367386   39075 main.go:141] libmachine: (multinode-336982-m03) Calling .GetState
	I0816 13:07:55.368706   39075 status.go:330] multinode-336982-m03 host status = "Stopped" (err=<nil>)
	I0816 13:07:55.368721   39075 status.go:343] host is not running, skipping remaining checks
	I0816 13:07:55.368728   39075 status.go:257] multinode-336982-m03 status: &{Name:multinode-336982-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
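The stderr trace above shows the per-node status flow: load the profile config, query the kvm2 driver for host state, SSH in to check kubelet, then probe the apiserver's /healthz endpoint and treat an HTTP 200 "ok" as Running (a stopped host, like m03 here, short-circuits after the state check). Below is a minimal Go sketch of that final healthz probe only; it skips TLS verification purely to stay self-contained, whereas the real check goes through minikube's own client setup, so treat it as an illustration rather than the actual implementation.

// Sketch of the last step in the status flow captured above:
// GET https://<node-ip>:8443/healthz and treat HTTP 200 with body "ok"
// as a running apiserver.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(ip string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only to keep the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", ip))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("192.168.39.208") // IP taken from the log above
	fmt.Println(ok, err)
}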

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-336982 node start m03 -v=7 --alsologtostderr: (39.964912804s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.57s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-336982 node delete m03: (1.462748469s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.96s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (185.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-336982 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 13:18:56.822623   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-336982 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m5.409279209s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-336982 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (185.92s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-336982
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-336982-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-336982-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.390782ms)

                                                
                                                
-- stdout --
	* [multinode-336982-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-336982-m02' is duplicated with machine name 'multinode-336982-m02' in profile 'multinode-336982'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-336982-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-336982-m03 --driver=kvm2  --container-runtime=crio: (42.040204423s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-336982
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-336982: exit status 80 (197.332395ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-336982 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-336982-m03 already exists in multinode-336982-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-336982-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.30s)

                                                
                                    
TestScheduledStopUnix (113.85s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-802327 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-802327 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.317670585s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802327 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-802327 -n scheduled-stop-802327
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802327 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802327 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-802327 -n scheduled-stop-802327
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-802327
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802327 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-802327
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-802327: exit status 7 (63.639163ms)

                                                
                                                
-- stdout --
	scheduled-stop-802327
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-802327 -n scheduled-stop-802327
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-802327 -n scheduled-stop-802327: exit status 7 (60.132654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-802327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-802327
--- PASS: TestScheduledStopUnix (113.85s)

                                                
                                    
TestRunningBinaryUpgrade (213.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2342132582 start -p running-upgrade-729731 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2342132582 start -p running-upgrade-729731 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m15.882504029s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-729731 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-729731 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.943109729s)
helpers_test.go:175: Cleaning up "running-upgrade-729731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-729731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-729731: (1.237431196s)
--- PASS: TestRunningBinaryUpgrade (213.66s)

                                                
                                    
TestPause/serial/Start (132.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-356375 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0816 13:28:56.824733   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-356375 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m12.922442026s)
--- PASS: TestPause/serial/Start (132.92s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.825005ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-169820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
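
Worth noting for the non-zero exit above: MK_USAGE (exit status 14) is the expected outcome here, because minikube refuses to combine --no-kubernetes with an explicit --kubernetes-version. A minimal sketch of the two invocations the test distinguishes, reusing the flags from the log (illustrative only):

    # rejected with exit status 14, as captured above
    out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio

    # accepted: drop the version flag (and clear any globally configured one first)
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --driver=kvm2 --container-runtime=crio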

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (56.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-169820 --driver=kvm2  --container-runtime=crio
E0816 13:30:23.987930   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-169820 --driver=kvm2  --container-runtime=crio: (56.725853002s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-169820 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (56.98s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --driver=kvm2  --container-runtime=crio: (7.766105395s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-169820 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-169820 status -o json: exit status 2 (243.912822ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-169820","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-169820
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-169820: (1.060094835s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.07s)
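
The exit status 2 from "status -o json" above is tolerated by the test: the existing profile was restarted with --no-kubernetes, so the host VM keeps running while the kubelet and API server stay stopped, and minikube signals that mixed state with a non-zero code. A small sketch for pulling the relevant fields out of the JSON (jq is an assumption; the test parses the JSON in Go):

    out/minikube-linux-amd64 -p NoKubernetes-169820 status -o json | jq -r '.Host, .Kubelet, .APIServer'
    # Running
    # Stopped
    # Stopped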

                                                
                                    
TestNoKubernetes/serial/Start (32.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0816 13:30:40.921150   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-169820 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.663717948s)
--- PASS: TestNoKubernetes/serial/Start (32.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-169820 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-169820 "sudo systemctl is-active --quiet service kubelet": exit status 1 (184.289644ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
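
For context on the stderr above: the probe relies on systemctl's exit code rather than its output. With is-active, an active unit exits 0 and an inactive one typically exits 3, which is what "ssh: Process exited with status 3" reflects, so the non-zero exit is exactly what "Kubernetes is not running" should look like. The same check run by hand (output shown is illustrative):

    out/minikube-linux-amd64 ssh -p NoKubernetes-169820 "sudo systemctl is-active kubelet; echo exit=\$?"
    # inactive
    # exit=3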

                                                
                                    
TestNoKubernetes/serial/ProfileList (13.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.373969981s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (13.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-169820
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-169820: (1.276559673s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.14s)

                                                
                                    
TestNetworkPlugins/group/false (2.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-251866 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-251866 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.430922ms)

                                                
                                                
-- stdout --
	* [false-251866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 13:31:40.274814   50295 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:31:40.274925   50295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:31:40.274934   50295 out.go:358] Setting ErrFile to fd 2...
	I0816 13:31:40.274938   50295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:31:40.275106   50295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-3966/.minikube/bin
	I0816 13:31:40.275667   50295 out.go:352] Setting JSON to false
	I0816 13:31:40.276579   50295 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4445,"bootTime":1723810655,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 13:31:40.276641   50295 start.go:139] virtualization: kvm guest
	I0816 13:31:40.279137   50295 out.go:177] * [false-251866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 13:31:40.280449   50295 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:31:40.280462   50295 notify.go:220] Checking for updates...
	I0816 13:31:40.283083   50295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:31:40.284427   50295 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-3966/kubeconfig
	I0816 13:31:40.285801   50295 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-3966/.minikube
	I0816 13:31:40.287119   50295 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 13:31:40.288575   50295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:31:40.290491   50295 config.go:182] Loaded profile config "NoKubernetes-169820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0816 13:31:40.290635   50295 config.go:182] Loaded profile config "kubernetes-upgrade-759623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 13:31:40.290737   50295 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:31:40.326825   50295 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 13:31:40.328078   50295 start.go:297] selected driver: kvm2
	I0816 13:31:40.328101   50295 start.go:901] validating driver "kvm2" against <nil>
	I0816 13:31:40.328112   50295 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:31:40.330270   50295 out.go:201] 
	W0816 13:31:40.331476   50295 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0816 13:31:40.332787   50295 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-251866 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-251866" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-251866

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251866"

                                                
                                                
----------------------- debugLogs end: false-251866 [took: 2.658391586s] --------------------------------
helpers_test.go:175: Cleaning up "false-251866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-251866
--- PASS: TestNetworkPlugins/group/false (2.92s)
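
The exit status 14 at the top of this block is the point of the test: with --container-runtime=crio, minikube rejects --cni=false outright ("The crio container runtime requires CNI"), so the false-251866 profile is never created and every debugLogs probe above reports a missing context or profile. For contrast, a sketch of starts the same suite does accept, reusing flags from the network-plugin runs later in this report:

    # let minikube choose a CNI automatically
    out/minikube-linux-amd64 start -p auto-251866 --memory=3072 --driver=kvm2 --container-runtime=crio
    # or name one explicitly
    out/minikube-linux-amd64 start -p kindnet-251866 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio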

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (102.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3992723317 start -p stopped-upgrade-760817 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3992723317 start -p stopped-upgrade-760817 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (55.63195666s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3992723317 -p stopped-upgrade-760817 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3992723317 -p stopped-upgrade-760817 stop: (2.136895703s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-760817 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-760817 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.214477053s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.98s)
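
Both binary-upgrade tests follow the same three-step shape: bring a cluster up with an older released binary that the harness downloaded to /tmp, stop it (in the stopped variant), then start the same profile with the freshly built out/minikube-linux-amd64 so the new code has to adopt the existing VM and configuration. Sketched with the paths from this run:

    OLD=/tmp/minikube-v1.26.0.3992723317
    "$OLD" start -p stopped-upgrade-760817 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    "$OLD" -p stopped-upgrade-760817 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-760817 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio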

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-169820 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-169820 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.677708ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-760817
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-760817: (1.099813572s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (116.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-311070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-311070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m56.064874341s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (53.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-302520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:35:40.921230   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-302520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (53.605457065s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-302520 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [69c9d264-90a3-4a34-8334-cd771631e880] Pending
helpers_test.go:344: "busybox" [69c9d264-90a3-4a34-8334-cd771631e880] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [69c9d264-90a3-4a34-8334-cd771631e880] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003998325s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-302520 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
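
The DeployApp steps are plain kubectl round-trips against each profile's context: create the busybox manifest, wait for the pod labelled integration-test=busybox to go Running, then exec into it. A rough hand-run equivalent (kubectl wait is an assumption; the harness polls the pod list itself):

    kubectl --context embed-certs-302520 create -f testdata/busybox.yaml
    kubectl --context embed-certs-302520 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context embed-certs-302520 exec busybox -- /bin/sh -c "ulimit -n"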

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-302520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-302520 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-311070 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9af952ee-3d22-4bd5-8138-87534a89702c] Pending
helpers_test.go:344: "busybox" [9af952ee-3d22-4bd5-8138-87534a89702c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9af952ee-3d22-4bd5-8138-87534a89702c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004364144s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-311070 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-311070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-311070 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)
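
In the EnableAddonWhileActive steps, the --images and --registries flags point the metrics-server addon at registry.k8s.io/echoserver:1.4 served from the fake.domain registry, and the kubectl describe that follows is how the test confirms the override landed on the deployment. A narrower way to inspect just the image field (jsonpath here is an assumption; the test scans the describe output):

    kubectl --context no-preload-311070 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to show the fake.domain-prefixed echoserver:1.4 override rather than the stock metrics-server image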

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-893736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-893736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m32.917385471s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (672.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-302520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-302520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m11.949473533s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-302520 -n embed-certs-302520
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (672.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (562.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-311070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-311070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m22.101360423s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-311070 -n no-preload-311070
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (562.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2a34a97-11aa-4c0e-b5e7-061dba89ed2d] Pending
helpers_test.go:344: "busybox" [a2a34a97-11aa-4c0e-b5e7-061dba89ed2d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2a34a97-11aa-4c0e-b5e7-061dba89ed2d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004147585s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-893736 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-893736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-882237 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-882237 --alsologtostderr -v=3: (3.282670259s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-882237 -n old-k8s-version-882237: exit status 7 (60.816748ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-882237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (425.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-893736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:43:56.824034   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:45:40.921099   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:47:03.989526   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-893736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (7m5.586284995s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (425.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-375308 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 14:03:56.823521   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/addons-966941/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-375308 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (47.57078292s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.57s)
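
The newest-cni profile mostly exercises flag plumbing: --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is handed through to kubeadm, and --network-plugin=cni leaves pod networking for a separate CNI install, which is why later subtests print "cni mode requires additional setup before pods can schedule". One way to confirm the CIDR took effect on the node (kubectl here is an assumption; the test only asserts that start succeeds):

    kubectl --context newest-cni-375308 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
    # a per-node slice of 10.42.0.0/16, e.g. 10.42.0.0/24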

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-375308 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-375308 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03163213s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-375308 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-375308 --alsologtostderr -v=3: (7.31460618s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-375308 -n newest-cni-375308
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-375308 -n newest-cni-375308: exit status 7 (64.847885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-375308 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-375308 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-375308 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (38.260160159s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-375308 -n newest-cni-375308
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.61s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (65.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m5.376559538s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-375308 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-375308 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-375308 -n newest-cni-375308
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-375308 -n newest-cni-375308: exit status 2 (242.400605ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-375308 -n newest-cni-375308
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-375308 -n newest-cni-375308: exit status 2 (256.563015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-375308 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-375308 -n newest-cni-375308
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-375308 -n newest-cni-375308
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)
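Note on reproducing the pause check by hand: while a profile is paused, minikube status reports the paused components and exits non-zero (exit status 2 in this run), which is why the harness marks those status errors as "may be ok". A minimal sketch of the same sequence, using the profile name from this run:

	out/minikube-linux-amd64 pause -p newest-cni-375308
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-375308   # prints "Paused", exited 2 in this run
	out/minikube-linux-amd64 unpause -p newest-cni-375308
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-375308   # expected to exit 0 once the apiserver is back up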

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.380078178s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (120.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0816 14:05:40.921653   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/functional-756697/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m0.334176001s)
--- PASS: TestNetworkPlugins/group/calico/Start (120.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9dxx9" [89a85269-dfa0-4aeb-985f-36430cdbf6fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9dxx9" [89a85269-dfa0-4aeb-985f-36430cdbf6fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003190665s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.29s)
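The NetCatPod step applies a small dnsutils/netcat deployment from the repo's testdata and waits for it to become Ready; the DNS, Localhost and HairPin probes below then run inside that pod. A rough hand-run equivalent (the kubectl wait call is an assumption here; the harness itself polls pod status in Go):

	kubectl --context auto-251866 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-251866 wait --for=condition=ready pod -l app=netcat --timeout=15m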

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
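The three probes above exercise different paths through the CNI: DNS resolves kubernetes.default from inside the pod, Localhost checks that the pod can reach the listener on its own 127.0.0.1:8080, and HairPin connects back to the pod through its own "netcat" Service, which generally only succeeds when the plugin handles hairpin NAT. The commands are the ones the test runs:

	kubectl --context auto-251866 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # localhost reachability
	kubectl --context auto-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin, via the Service name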

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (76.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m16.654661642s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2fm6p" [b7cc6cb7-e700-4e2f-8d2d-ca86de87c49a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00501844s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
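ControllerPod only verifies that the CNI's own daemon pod is Running in kube-system before any connectivity checks start. A hand-run equivalent of the same readiness poll (the label selector comes from this run; kubectl wait is an assumption, the harness uses its own polling helper):

	kubectl --context kindnet-251866 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-251866 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m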

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q8jnr" [131f65f5-e519-47c9-8671-4b2acbeefb18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 14:06:50.532061   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:50.538481   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:50.549850   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:50.571308   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:50.612743   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:50.694224   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:50.855757   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:51.177229   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:51.819477   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:06:53.101525   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-q8jnr" [131f65f5-e519-47c9-8671-4b2acbeefb18] Running
E0816 14:06:55.663608   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004893504s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (87.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m27.86289416s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-893736 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
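VerifyKubernetesImages dumps the images cached in the profile and calls out anything outside the expected Kubernetes/minikube set, which is why kindnetd and the busybox test image are listed above. To eyeball the same data directly (this run used --format=json; the table format flag is an assumption):

	out/minikube-linux-amd64 -p default-k8s-diff-port-893736 image list --format=json
	out/minikube-linux-amd64 -p default-k8s-diff-port-893736 image list --format=table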

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-893736 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736: exit status 2 (261.158805ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736: exit status 2 (259.355552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-893736 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-893736 -n default-k8s-diff-port-893736
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vb59k" [a8cc2259-e554-46bd-ad70-51d9952ed71c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004982448s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (89.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0816 14:07:33.845282   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:33.851698   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:33.863182   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:33.884602   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:33.925980   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:34.007403   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:34.169073   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:34.490917   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:35.132932   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:07:36.414856   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.413335102s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fzj97" [92dc7619-d169-4fa0-ae58-1621ad3642cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 14:07:38.976207   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fzj97" [92dc7619-d169-4fa0-ae58-1621ad3642cf] Running
E0816 14:07:44.097579   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005477027s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mj5rv" [e386bdbd-67f5-4078-9707-db4920493cce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 14:07:54.339187   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mj5rv" [e386bdbd-67f5-4078-9707-db4920493cce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005323725s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0816 14:08:12.471376   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/no-preload-311070/client.crt: no such file or directory" logger="UnhandledError"
E0816 14:08:14.820961   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-251866 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.401491975s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tnf5m" [14c882f5-9018-4e5b-9e49-2041df1e47ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tnf5m" [14c882f5-9018-4e5b-9e49-2041df1e47ec] Running
E0816 14:08:55.782673   11149 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/old-k8s-version-882237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003544802s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8qvtw" [a5090fb0-293f-4170-aed3-96cd231a3038] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00475882s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nz4xt" [254880ce-4daa-4e9d-abb3-bfc6b6ea2530] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nz4xt" [254880ce-4daa-4e9d-abb3-bfc6b6ea2530] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005139501s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-251866 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-251866 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qt9c9" [9621700e-df71-407b-bc2b-462295a89f67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qt9c9" [9621700e-df71-407b-bc2b-462295a89f67] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004195643s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-251866 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-251866 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    

Test skip (37/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
255 TestStartStop/group/disable-driver-mounts 0.14
271 TestNetworkPlugins/group/kubenet 3.05
281 TestNetworkPlugins/group/cilium 2.96
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-338033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-338033
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-251866 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-251866" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-3966/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 13:30:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.176:8443
  name: running-upgrade-729731
contexts:
- context:
    cluster: running-upgrade-729731
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 13:30:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-729731
  name: running-upgrade-729731
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-729731
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/running-upgrade-729731/client.crt
    client-key: /home/jenkins/minikube-integration/19423-3966/.minikube/profiles/running-upgrade-729731/client.key
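
(For reference: the kubeconfig above only contains the running-upgrade-729731 cluster and its current-context is empty, which is why every command in this debug log that targets kubenet-251866 fails with "context was not found". Below is a minimal Go sketch, using k8s.io/client-go/tools/clientcmd, of checking for a context before running context-scoped commands; the kubeconfig path and context name are illustrative values taken from this report, not code from the test suite.)

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative values; this report's kubeconfig lives under
	// /home/jenkins/minikube-integration/19423-3966/.minikube.
	kubeconfig := os.Getenv("KUBECONFIG")
	wantContext := "kubenet-251866"

	// Load the kubeconfig file into an api.Config structure.
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to load kubeconfig: %v\n", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[wantContext]; !ok {
		// The situation in this debug log: only running-upgrade-729731 is
		// present, so anything passed --context kubenet-251866 fails.
		fmt.Printf("context %q not found; skipping context-scoped debug commands\n", wantContext)
		return
	}
	fmt.Printf("context %q found\n", wantContext)
}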

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-251866

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251866"

                                                
                                                
----------------------- debugLogs end: kubenet-251866 [took: 2.918350869s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-251866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-251866
--- SKIP: TestNetworkPlugins/group/kubenet (3.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-251866 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-251866" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-251866

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-251866" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251866"

                                                
                                                
----------------------- debugLogs end: cilium-251866 [took: 2.828543583s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-251866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-251866
--- SKIP: TestNetworkPlugins/group/cilium (2.96s)

                                                
                                    